May 10 00:46:48.042203 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 9 23:12:23 -00 2025 May 10 00:46:48.042235 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:48.042247 kernel: BIOS-provided physical RAM map: May 10 00:46:48.042255 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable May 10 00:46:48.042272 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved May 10 00:46:48.042281 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved May 10 00:46:48.042290 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable May 10 00:46:48.042298 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved May 10 00:46:48.042309 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 10 00:46:48.042317 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved May 10 00:46:48.042325 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved May 10 00:46:48.042333 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved May 10 00:46:48.042341 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved May 10 00:46:48.042349 kernel: NX (Execute Disable) protection: active May 10 00:46:48.042361 kernel: SMBIOS 2.8 present. 
May 10 00:46:48.042389 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 May 10 00:46:48.042398 kernel: Hypervisor detected: KVM May 10 00:46:48.042407 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 10 00:46:48.042415 kernel: kvm-clock: cpu 0, msr 8c196001, primary cpu clock May 10 00:46:48.042424 kernel: kvm-clock: using sched offset of 2566883519 cycles May 10 00:46:48.042434 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 10 00:46:48.042443 kernel: tsc: Detected 2794.748 MHz processor May 10 00:46:48.042453 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 10 00:46:48.042465 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 10 00:46:48.042474 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 May 10 00:46:48.042483 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 10 00:46:48.042493 kernel: Using GB pages for direct mapping May 10 00:46:48.042502 kernel: ACPI: Early table checksum verification disabled May 10 00:46:48.042511 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) May 10 00:46:48.042520 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042530 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042539 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042549 kernel: ACPI: FACS 0x000000009CFE0000 000040 May 10 00:46:48.042557 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042566 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042575 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 10 00:46:48.042585 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 
10 00:46:48.042594 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] May 10 00:46:48.042603 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] May 10 00:46:48.042613 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] May 10 00:46:48.042627 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] May 10 00:46:48.042637 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] May 10 00:46:48.042647 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] May 10 00:46:48.042657 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] May 10 00:46:48.042666 kernel: No NUMA configuration found May 10 00:46:48.042676 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] May 10 00:46:48.042688 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] May 10 00:46:48.042698 kernel: Zone ranges: May 10 00:46:48.042708 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 10 00:46:48.042717 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] May 10 00:46:48.042726 kernel: Normal empty May 10 00:46:48.042736 kernel: Movable zone start for each node May 10 00:46:48.042745 kernel: Early memory node ranges May 10 00:46:48.042755 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] May 10 00:46:48.042764 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] May 10 00:46:48.042776 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] May 10 00:46:48.042785 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 10 00:46:48.042794 kernel: On node 0, zone DMA: 97 pages in unavailable ranges May 10 00:46:48.042804 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges May 10 00:46:48.042813 kernel: ACPI: PM-Timer IO Port: 0x608 May 10 00:46:48.042823 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 10 00:46:48.042832 kernel: IOAPIC[0]: apic_id 0, version 17, address 
0xfec00000, GSI 0-23 May 10 00:46:48.042842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 10 00:46:48.042851 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 10 00:46:48.042861 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 10 00:46:48.042873 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 10 00:46:48.042882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 10 00:46:48.042891 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 10 00:46:48.042901 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 10 00:46:48.042911 kernel: TSC deadline timer available May 10 00:46:48.042920 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 10 00:46:48.042929 kernel: kvm-guest: KVM setup pv remote TLB flush May 10 00:46:48.042938 kernel: kvm-guest: setup PV sched yield May 10 00:46:48.042947 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices May 10 00:46:48.042959 kernel: Booting paravirtualized kernel on KVM May 10 00:46:48.042968 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 10 00:46:48.042978 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 10 00:46:48.042987 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 10 00:46:48.042997 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 10 00:46:48.043006 kernel: pcpu-alloc: [0] 0 1 2 3 May 10 00:46:48.043015 kernel: kvm-guest: setup async PF for cpu 0 May 10 00:46:48.043025 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 May 10 00:46:48.043034 kernel: kvm-guest: PV spinlocks enabled May 10 00:46:48.043046 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 10 00:46:48.043055 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 May 10 00:46:48.043065 kernel: Policy zone: DMA32 May 10 00:46:48.043077 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a May 10 00:46:48.043087 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 10 00:46:48.043097 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 10 00:46:48.043107 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 10 00:46:48.043115 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 10 00:46:48.043127 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 134796K reserved, 0K cma-reserved) May 10 00:46:48.043137 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 10 00:46:48.043146 kernel: ftrace: allocating 34584 entries in 136 pages May 10 00:46:48.043156 kernel: ftrace: allocated 136 pages with 2 groups May 10 00:46:48.043166 kernel: rcu: Hierarchical RCU implementation. May 10 00:46:48.043176 kernel: rcu: RCU event tracing is enabled. May 10 00:46:48.043187 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 10 00:46:48.043196 kernel: Rude variant of Tasks RCU enabled. May 10 00:46:48.043206 kernel: Tracing variant of Tasks RCU enabled. May 10 00:46:48.043219 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 10 00:46:48.043228 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 10 00:46:48.043238 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 10 00:46:48.043248 kernel: random: crng init done May 10 00:46:48.043257 kernel: Console: colour VGA+ 80x25 May 10 00:46:48.043278 kernel: printk: console [ttyS0] enabled May 10 00:46:48.043288 kernel: ACPI: Core revision 20210730 May 10 00:46:48.043298 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 10 00:46:48.043308 kernel: APIC: Switch to symmetric I/O mode setup May 10 00:46:48.043320 kernel: x2apic enabled May 10 00:46:48.043331 kernel: Switched APIC routing to physical x2apic. May 10 00:46:48.043340 kernel: kvm-guest: setup PV IPIs May 10 00:46:48.043349 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 10 00:46:48.043359 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 10 00:46:48.043390 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) May 10 00:46:48.043402 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 10 00:46:48.043411 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 10 00:46:48.043420 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 10 00:46:48.043439 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 10 00:46:48.043449 kernel: Spectre V2 : Mitigation: Retpolines May 10 00:46:48.043459 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 10 00:46:48.043471 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 10 00:46:48.043482 kernel: RETBleed: Mitigation: untrained return thunk May 10 00:46:48.043492 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 10 00:46:48.043501 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 10 00:46:48.043511 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 10 00:46:48.043522 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 10 00:46:48.043534 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 10 00:46:48.043544 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 10 00:46:48.043553 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 10 00:46:48.043562 kernel: Freeing SMP alternatives memory: 32K May 10 00:46:48.043572 kernel: pid_max: default: 32768 minimum: 301 May 10 00:46:48.043582 kernel: LSM: Security Framework initializing May 10 00:46:48.043592 kernel: SELinux: Initializing. 
May 10 00:46:48.043604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:46:48.043615 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 10 00:46:48.043625 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 10 00:46:48.043635 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 10 00:46:48.043645 kernel: ... version: 0 May 10 00:46:48.043655 kernel: ... bit width: 48 May 10 00:46:48.043666 kernel: ... generic registers: 6 May 10 00:46:48.043676 kernel: ... value mask: 0000ffffffffffff May 10 00:46:48.043687 kernel: ... max period: 00007fffffffffff May 10 00:46:48.043700 kernel: ... fixed-purpose events: 0 May 10 00:46:48.043709 kernel: ... event mask: 000000000000003f May 10 00:46:48.043719 kernel: signal: max sigframe size: 1776 May 10 00:46:48.043729 kernel: rcu: Hierarchical SRCU implementation. May 10 00:46:48.043740 kernel: smp: Bringing up secondary CPUs ... May 10 00:46:48.043750 kernel: x86: Booting SMP configuration: May 10 00:46:48.043760 kernel: .... 
node #0, CPUs: #1 May 10 00:46:48.043771 kernel: kvm-clock: cpu 1, msr 8c196041, secondary cpu clock May 10 00:46:48.043781 kernel: kvm-guest: setup async PF for cpu 1 May 10 00:46:48.043791 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 May 10 00:46:48.043803 kernel: #2 May 10 00:46:48.043814 kernel: kvm-clock: cpu 2, msr 8c196081, secondary cpu clock May 10 00:46:48.043824 kernel: kvm-guest: setup async PF for cpu 2 May 10 00:46:48.043835 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 May 10 00:46:48.043845 kernel: #3 May 10 00:46:48.043855 kernel: kvm-clock: cpu 3, msr 8c1960c1, secondary cpu clock May 10 00:46:48.043865 kernel: kvm-guest: setup async PF for cpu 3 May 10 00:46:48.043875 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 May 10 00:46:48.043885 kernel: smp: Brought up 1 node, 4 CPUs May 10 00:46:48.043898 kernel: smpboot: Max logical packages: 1 May 10 00:46:48.043908 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) May 10 00:46:48.043918 kernel: devtmpfs: initialized May 10 00:46:48.043928 kernel: x86/mm: Memory block size: 128MB May 10 00:46:48.043938 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 10 00:46:48.043947 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 10 00:46:48.043957 kernel: pinctrl core: initialized pinctrl subsystem May 10 00:46:48.043967 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 10 00:46:48.043976 kernel: audit: initializing netlink subsys (disabled) May 10 00:46:48.043989 kernel: audit: type=2000 audit(1746838007.581:1): state=initialized audit_enabled=0 res=1 May 10 00:46:48.043999 kernel: thermal_sys: Registered thermal governor 'step_wise' May 10 00:46:48.044009 kernel: thermal_sys: Registered thermal governor 'user_space' May 10 00:46:48.044019 kernel: cpuidle: using governor menu May 10 00:46:48.044029 kernel: ACPI: bus type PCI registered May 10 00:46:48.044039 kernel: acpiphp: ACPI Hot 
Plug PCI Controller Driver version: 0.5 May 10 00:46:48.044049 kernel: dca service started, version 1.12.1 May 10 00:46:48.044059 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 10 00:46:48.044069 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 10 00:46:48.044081 kernel: PCI: Using configuration type 1 for base access May 10 00:46:48.044092 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 10 00:46:48.044101 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 10 00:46:48.044112 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 10 00:46:48.044121 kernel: ACPI: Added _OSI(Module Device) May 10 00:46:48.044132 kernel: ACPI: Added _OSI(Processor Device) May 10 00:46:48.044142 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 10 00:46:48.044152 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 10 00:46:48.044162 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 10 00:46:48.044175 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 10 00:46:48.044185 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 10 00:46:48.044196 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 10 00:46:48.044206 kernel: ACPI: Interpreter enabled May 10 00:46:48.044216 kernel: ACPI: PM: (supports S0 S3 S5) May 10 00:46:48.044226 kernel: ACPI: Using IOAPIC for interrupt routing May 10 00:46:48.044237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 10 00:46:48.044248 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 10 00:46:48.044258 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 10 00:46:48.044471 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 10 00:46:48.044553 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 10 00:46:48.044623 kernel: acpi 
PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 10 00:46:48.044633 kernel: PCI host bridge to bus 0000:00 May 10 00:46:48.044715 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 10 00:46:48.044780 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 10 00:46:48.044847 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 10 00:46:48.044909 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 10 00:46:48.044970 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 10 00:46:48.045031 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] May 10 00:46:48.045092 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 10 00:46:48.045190 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 10 00:46:48.045286 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 10 00:46:48.045363 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] May 10 00:46:48.045448 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] May 10 00:46:48.045516 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] May 10 00:46:48.045586 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 10 00:46:48.045666 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 10 00:46:48.045739 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] May 10 00:46:48.045821 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] May 10 00:46:48.045893 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] May 10 00:46:48.045982 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 10 00:46:48.046051 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] May 10 00:46:48.046124 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] May 10 00:46:48.046194 kernel: pci 0000:00:03.0: reg 0x20: [mem 
0xfe004000-0xfe007fff 64bit pref] May 10 00:46:48.046291 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 10 00:46:48.046369 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] May 10 00:46:48.046456 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] May 10 00:46:48.046526 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] May 10 00:46:48.046595 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] May 10 00:46:48.046708 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 10 00:46:48.046794 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 10 00:46:48.046908 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 10 00:46:48.047033 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] May 10 00:46:48.047127 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] May 10 00:46:48.047276 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 10 00:46:48.047406 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] May 10 00:46:48.047418 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 10 00:46:48.047426 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 10 00:46:48.047433 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 10 00:46:48.047440 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 10 00:46:48.047449 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 10 00:46:48.047456 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 10 00:46:48.047463 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 10 00:46:48.047470 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 10 00:46:48.047477 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 10 00:46:48.047484 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 10 00:46:48.047490 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 
May 10 00:46:48.047505 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 10 00:46:48.047516 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 10 00:46:48.047525 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 10 00:46:48.047532 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 10 00:46:48.047539 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 10 00:46:48.047546 kernel: iommu: Default domain type: Translated May 10 00:46:48.047552 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 10 00:46:48.047638 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 10 00:46:48.047708 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 10 00:46:48.047810 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 10 00:46:48.047823 kernel: vgaarb: loaded May 10 00:46:48.047831 kernel: pps_core: LinuxPPS API ver. 1 registered May 10 00:46:48.047838 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 10 00:46:48.047845 kernel: PTP clock support registered May 10 00:46:48.047852 kernel: PCI: Using ACPI for IRQ routing May 10 00:46:48.047859 kernel: PCI: pci_cache_line_size set to 64 bytes May 10 00:46:48.047865 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] May 10 00:46:48.047872 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] May 10 00:46:48.047879 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 10 00:46:48.047899 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 10 00:46:48.047907 kernel: clocksource: Switched to clocksource kvm-clock May 10 00:46:48.047914 kernel: VFS: Disk quotas dquot_6.6.0 May 10 00:46:48.047921 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 10 00:46:48.047928 kernel: pnp: PnP ACPI init May 10 00:46:48.048043 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 10 00:46:48.048055 kernel: pnp: PnP ACPI: found 6 devices May 10 00:46:48.048074 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 10 00:46:48.048083 kernel: NET: Registered PF_INET protocol family May 10 00:46:48.048091 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 10 00:46:48.048098 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 10 00:46:48.048105 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 10 00:46:48.048112 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 10 00:46:48.048131 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 10 00:46:48.048138 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 10 00:46:48.048145 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 00:46:48.048152 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 10 
00:46:48.048161 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 10 00:46:48.048179 kernel: NET: Registered PF_XDP protocol family May 10 00:46:48.048261 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 10 00:46:48.048362 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 10 00:46:48.048449 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 10 00:46:48.048534 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 10 00:46:48.048616 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 10 00:46:48.048698 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] May 10 00:46:48.048722 kernel: PCI: CLS 0 bytes, default 64 May 10 00:46:48.048730 kernel: Initialise system trusted keyrings May 10 00:46:48.048739 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 10 00:46:48.048747 kernel: Key type asymmetric registered May 10 00:46:48.048767 kernel: Asymmetric key parser 'x509' registered May 10 00:46:48.048774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 10 00:46:48.048781 kernel: io scheduler mq-deadline registered May 10 00:46:48.048788 kernel: io scheduler kyber registered May 10 00:46:48.048795 kernel: io scheduler bfq registered May 10 00:46:48.048802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 10 00:46:48.048823 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 10 00:46:48.048830 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 10 00:46:48.048837 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 10 00:46:48.048844 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 10 00:46:48.048851 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 10 00:46:48.048858 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 10 00:46:48.048865 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 10 00:46:48.048872 
kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 10 00:46:48.048948 kernel: rtc_cmos 00:04: RTC can wake from S4 May 10 00:46:48.048961 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 10 00:46:48.049023 kernel: rtc_cmos 00:04: registered as rtc0 May 10 00:46:48.049086 kernel: rtc_cmos 00:04: setting system clock to 2025-05-10T00:46:47 UTC (1746838007) May 10 00:46:48.049148 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 10 00:46:48.049157 kernel: NET: Registered PF_INET6 protocol family May 10 00:46:48.049164 kernel: Segment Routing with IPv6 May 10 00:46:48.049171 kernel: In-situ OAM (IOAM) with IPv6 May 10 00:46:48.049178 kernel: NET: Registered PF_PACKET protocol family May 10 00:46:48.049187 kernel: Key type dns_resolver registered May 10 00:46:48.049194 kernel: IPI shorthand broadcast: enabled May 10 00:46:48.049201 kernel: sched_clock: Marking stable (411174459, 105039106)->(568691603, -52478038) May 10 00:46:48.049208 kernel: registered taskstats version 1 May 10 00:46:48.049215 kernel: Loading compiled-in X.509 certificates May 10 00:46:48.049222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 0c62a22cd9157131d2e97d5a2e1bd9023e187117' May 10 00:46:48.049229 kernel: Key type .fscrypt registered May 10 00:46:48.049236 kernel: Key type fscrypt-provisioning registered May 10 00:46:48.049243 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 10 00:46:48.049251 kernel: ima: Allocated hash algorithm: sha1 May 10 00:46:48.049258 kernel: ima: No architecture policies found May 10 00:46:48.049274 kernel: clk: Disabling unused clocks May 10 00:46:48.049281 kernel: Freeing unused kernel image (initmem) memory: 47456K May 10 00:46:48.049288 kernel: Write protecting the kernel read-only data: 28672k May 10 00:46:48.049295 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 10 00:46:48.049303 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 10 00:46:48.049310 kernel: Run /init as init process May 10 00:46:48.049318 kernel: with arguments: May 10 00:46:48.049325 kernel: /init May 10 00:46:48.049331 kernel: with environment: May 10 00:46:48.049338 kernel: HOME=/ May 10 00:46:48.049345 kernel: TERM=linux May 10 00:46:48.049352 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 10 00:46:48.049364 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:46:48.049383 systemd[1]: Detected virtualization kvm. May 10 00:46:48.049392 systemd[1]: Detected architecture x86-64. May 10 00:46:48.049399 systemd[1]: Running in initrd. May 10 00:46:48.049406 systemd[1]: No hostname configured, using default hostname. May 10 00:46:48.049414 systemd[1]: Hostname set to . May 10 00:46:48.049421 systemd[1]: Initializing machine ID from VM UUID. May 10 00:46:48.049429 systemd[1]: Queued start job for default target initrd.target. May 10 00:46:48.049436 systemd[1]: Started systemd-ask-password-console.path. May 10 00:46:48.049444 systemd[1]: Reached target cryptsetup.target. May 10 00:46:48.049451 systemd[1]: Reached target paths.target. May 10 00:46:48.049460 systemd[1]: Reached target slices.target. 
May 10 00:46:48.049473 systemd[1]: Reached target swap.target. May 10 00:46:48.049482 systemd[1]: Reached target timers.target. May 10 00:46:48.049490 systemd[1]: Listening on iscsid.socket. May 10 00:46:48.049498 systemd[1]: Listening on iscsiuio.socket. May 10 00:46:48.049506 systemd[1]: Listening on systemd-journald-audit.socket. May 10 00:46:48.049514 systemd[1]: Listening on systemd-journald-dev-log.socket. May 10 00:46:48.049522 systemd[1]: Listening on systemd-journald.socket. May 10 00:46:48.049529 systemd[1]: Listening on systemd-networkd.socket. May 10 00:46:48.049537 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:46:48.049545 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:46:48.049552 systemd[1]: Reached target sockets.target. May 10 00:46:48.049560 systemd[1]: Starting kmod-static-nodes.service... May 10 00:46:48.049568 systemd[1]: Finished network-cleanup.service. May 10 00:46:48.049576 systemd[1]: Starting systemd-fsck-usr.service... May 10 00:46:48.049585 systemd[1]: Starting systemd-journald.service... May 10 00:46:48.049593 systemd[1]: Starting systemd-modules-load.service... May 10 00:46:48.049600 systemd[1]: Starting systemd-resolved.service... May 10 00:46:48.049608 systemd[1]: Starting systemd-vconsole-setup.service... May 10 00:46:48.049616 systemd[1]: Finished kmod-static-nodes.service. May 10 00:46:48.049624 kernel: audit: type=1130 audit(1746838008.041:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.049631 systemd[1]: Finished systemd-fsck-usr.service. May 10 00:46:48.049646 systemd-journald[199]: Journal started May 10 00:46:48.049692 systemd-journald[199]: Runtime Journal (/run/log/journal/b6360025531244f98275997cb9dea449) is 6.0M, max 48.5M, 42.5M free. 
May 10 00:46:48.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.047387 systemd-modules-load[200]: Inserted module 'overlay' May 10 00:46:48.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.057157 systemd-resolved[201]: Positive Trust Anchors: May 10 00:46:48.092195 systemd[1]: Started systemd-journald.service. May 10 00:46:48.092219 kernel: audit: type=1130 audit(1746838008.082:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.092233 kernel: audit: type=1130 audit(1746838008.088:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:48.057168 systemd-resolved[201]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 10 00:46:48.096485 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
May 10 00:46:48.057194 systemd-resolved[201]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:46:48.059658 systemd-resolved[201]: Defaulting to hostname 'linux'.
May 10 00:46:48.110726 kernel: audit: type=1130 audit(1746838008.092:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.110746 kernel: audit: type=1130 audit(1746838008.096:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.092228 systemd[1]: Started systemd-resolved.service.
May 10 00:46:48.092697 systemd[1]: Finished systemd-vconsole-setup.service.
May 10 00:46:48.097019 systemd[1]: Reached target nss-lookup.target.
May 10 00:46:48.112985 kernel: Bridge firewalling registered
May 10 00:46:48.097951 systemd[1]: Starting dracut-cmdline-ask.service...
May 10 00:46:48.111440 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 10 00:46:48.116509 systemd-modules-load[200]: Inserted module 'br_netfilter'
May 10 00:46:48.121114 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 10 00:46:48.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.126398 kernel: audit: type=1130 audit(1746838008.121:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.134257 systemd[1]: Finished dracut-cmdline-ask.service.
May 10 00:46:48.139047 kernel: audit: type=1130 audit(1746838008.134:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.139058 systemd[1]: Starting dracut-cmdline.service...
May 10 00:46:48.141405 kernel: SCSI subsystem initialized
May 10 00:46:48.148650 dracut-cmdline[218]: dracut-dracut-053
May 10 00:46:48.150783 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=39569409b30be1967efab22b453b92a780dcf0fe8e1448a18bf235b5cf33e54a
May 10 00:46:48.160367 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 10 00:46:48.160423 kernel: device-mapper: uevent: version 1.0.3
May 10 00:46:48.160443 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 10 00:46:48.163037 systemd-modules-load[200]: Inserted module 'dm_multipath'
May 10 00:46:48.163745 systemd[1]: Finished systemd-modules-load.service.
May 10 00:46:48.169462 kernel: audit: type=1130 audit(1746838008.164:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.165553 systemd[1]: Starting systemd-sysctl.service...
May 10 00:46:48.174976 systemd[1]: Finished systemd-sysctl.service.
May 10 00:46:48.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.180427 kernel: audit: type=1130 audit(1746838008.176:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.222408 kernel: Loading iSCSI transport class v2.0-870.
May 10 00:46:48.239425 kernel: iscsi: registered transport (tcp)
May 10 00:46:48.263405 kernel: iscsi: registered transport (qla4xxx)
May 10 00:46:48.263471 kernel: QLogic iSCSI HBA Driver
May 10 00:46:48.291068 systemd[1]: Finished dracut-cmdline.service.
May 10 00:46:48.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.293642 systemd[1]: Starting dracut-pre-udev.service...
May 10 00:46:48.347415 kernel: raid6: avx2x4 gen() 26648 MB/s
May 10 00:46:48.364416 kernel: raid6: avx2x4 xor() 7608 MB/s
May 10 00:46:48.381419 kernel: raid6: avx2x2 gen() 26138 MB/s
May 10 00:46:48.398421 kernel: raid6: avx2x2 xor() 18358 MB/s
May 10 00:46:48.415417 kernel: raid6: avx2x1 gen() 25917 MB/s
May 10 00:46:48.432421 kernel: raid6: avx2x1 xor() 15232 MB/s
May 10 00:46:48.449415 kernel: raid6: sse2x4 gen() 14478 MB/s
May 10 00:46:48.466416 kernel: raid6: sse2x4 xor() 7393 MB/s
May 10 00:46:48.483404 kernel: raid6: sse2x2 gen() 16208 MB/s
May 10 00:46:48.500424 kernel: raid6: sse2x2 xor() 9722 MB/s
May 10 00:46:48.517412 kernel: raid6: sse2x1 gen() 12120 MB/s
May 10 00:46:48.534909 kernel: raid6: sse2x1 xor() 7559 MB/s
May 10 00:46:48.534965 kernel: raid6: using algorithm avx2x4 gen() 26648 MB/s
May 10 00:46:48.534975 kernel: raid6: .... xor() 7608 MB/s, rmw enabled
May 10 00:46:48.535672 kernel: raid6: using avx2x2 recovery algorithm
May 10 00:46:48.548418 kernel: xor: automatically using best checksumming function avx
May 10 00:46:48.641415 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
May 10 00:46:48.649878 systemd[1]: Finished dracut-pre-udev.service.
May 10 00:46:48.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.651000 audit: BPF prog-id=7 op=LOAD
May 10 00:46:48.651000 audit: BPF prog-id=8 op=LOAD
May 10 00:46:48.652283 systemd[1]: Starting systemd-udevd.service...
May 10 00:46:48.664499 systemd-udevd[401]: Using default interface naming scheme 'v252'.
May 10 00:46:48.668291 systemd[1]: Started systemd-udevd.service.
May 10 00:46:48.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.703225 systemd[1]: Starting dracut-pre-trigger.service...
May 10 00:46:48.713348 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
May 10 00:46:48.735231 systemd[1]: Finished dracut-pre-trigger.service.
May 10 00:46:48.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.765462 systemd[1]: Starting systemd-udev-trigger.service...
May 10 00:46:48.800299 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:46:48.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:48.842557 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 10 00:46:48.871141 kernel: cryptd: max_cpu_qlen set to 1000
May 10 00:46:48.871158 kernel: libata version 3.00 loaded.
May 10 00:46:48.871167 kernel: ahci 0000:00:1f.2: version 3.0
May 10 00:46:48.881351 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
May 10 00:46:48.881385 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
May 10 00:46:48.881483 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
May 10 00:46:48.881561 kernel: scsi host0: ahci
May 10 00:46:48.881662 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 10 00:46:48.881672 kernel: GPT:9289727 != 19775487
May 10 00:46:48.881681 kernel: GPT:Alternate GPT header not at the end of the disk.
May 10 00:46:48.881690 kernel: GPT:9289727 != 19775487
May 10 00:46:48.881698 kernel: GPT: Use GNU Parted to correct GPT errors.
May 10 00:46:48.881706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 00:46:48.881715 kernel: AVX2 version of gcm_enc/dec engaged.
May 10 00:46:48.881725 kernel: AES CTR mode by8 optimization enabled
May 10 00:46:48.881734 kernel: scsi host1: ahci
May 10 00:46:48.881826 kernel: scsi host2: ahci
May 10 00:46:48.881908 kernel: scsi host3: ahci
May 10 00:46:48.881991 kernel: scsi host4: ahci
May 10 00:46:48.882071 kernel: scsi host5: ahci
May 10 00:46:48.882154 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
May 10 00:46:48.882163 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
May 10 00:46:48.882172 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
May 10 00:46:48.882181 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
May 10 00:46:48.882190 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
May 10 00:46:48.882198 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
May 10 00:46:48.898399 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452)
May 10 00:46:48.907075 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 10 00:46:48.935889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 10 00:46:48.937140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 10 00:46:48.943484 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 10 00:46:48.952923 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:46:48.955046 systemd[1]: Starting disk-uuid.service...
May 10 00:46:48.964621 disk-uuid[537]: Primary Header is updated.
May 10 00:46:48.964621 disk-uuid[537]: Secondary Entries is updated.
May 10 00:46:48.964621 disk-uuid[537]: Secondary Header is updated.
May 10 00:46:48.969405 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 00:46:48.973404 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 00:46:48.976413 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 00:46:49.194282 kernel: ata4: SATA link down (SStatus 0 SControl 300)
May 10 00:46:49.194362 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
May 10 00:46:49.194386 kernel: ata1: SATA link down (SStatus 0 SControl 300)
May 10 00:46:49.195400 kernel: ata6: SATA link down (SStatus 0 SControl 300)
May 10 00:46:49.196416 kernel: ata5: SATA link down (SStatus 0 SControl 300)
May 10 00:46:49.198403 kernel: ata2: SATA link down (SStatus 0 SControl 300)
May 10 00:46:49.198427 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
May 10 00:46:49.208525 kernel: ata3.00: applying bridge limits
May 10 00:46:49.210061 kernel: ata3.00: configured for UDMA/100
May 10 00:46:49.217182 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
May 10 00:46:49.258034 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
May 10 00:46:49.275302 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
May 10 00:46:49.275326 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
May 10 00:46:50.140981 disk-uuid[538]: The operation has completed successfully.
May 10 00:46:50.180799 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 10 00:46:50.200297 systemd[1]: disk-uuid.service: Deactivated successfully.
May 10 00:46:50.200491 systemd[1]: Finished disk-uuid.service.
May 10 00:46:50.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.225806 systemd[1]: Starting verity-setup.service...
May 10 00:46:50.242403 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
May 10 00:46:50.262731 systemd[1]: Found device dev-mapper-usr.device.
May 10 00:46:50.274264 systemd[1]: Mounting sysusr-usr.mount...
May 10 00:46:50.276159 systemd[1]: Finished verity-setup.service.
May 10 00:46:50.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.349408 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 10 00:46:50.349542 systemd[1]: Mounted sysusr-usr.mount.
May 10 00:46:50.350174 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 10 00:46:50.351155 systemd[1]: Starting ignition-setup.service...
May 10 00:46:50.352323 systemd[1]: Starting parse-ip-for-networkd.service...
May 10 00:46:50.401289 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:46:50.401348 kernel: BTRFS info (device vda6): using free space tree
May 10 00:46:50.401358 kernel: BTRFS info (device vda6): has skinny extents
May 10 00:46:50.411395 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 10 00:46:50.430812 systemd[1]: Finished parse-ip-for-networkd.service.
May 10 00:46:50.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.442000 audit: BPF prog-id=9 op=LOAD
May 10 00:46:50.443215 systemd[1]: Starting systemd-networkd.service...
May 10 00:46:50.469588 systemd-networkd[721]: lo: Link UP
May 10 00:46:50.469600 systemd-networkd[721]: lo: Gained carrier
May 10 00:46:50.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.470049 systemd-networkd[721]: Enumeration completed
May 10 00:46:50.470131 systemd[1]: Started systemd-networkd.service.
May 10 00:46:50.470319 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:46:50.473628 systemd-networkd[721]: eth0: Link UP
May 10 00:46:50.473633 systemd-networkd[721]: eth0: Gained carrier
May 10 00:46:50.473907 systemd[1]: Reached target network.target.
May 10 00:46:50.477097 systemd[1]: Starting iscsiuio.service...
May 10 00:46:50.535108 systemd[1]: Started iscsiuio.service.
May 10 00:46:50.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.536367 systemd[1]: Finished ignition-setup.service.
May 10 00:46:50.538537 systemd[1]: Starting ignition-fetch-offline.service...
May 10 00:46:50.540159 systemd[1]: Starting iscsid.service...
May 10 00:46:50.542443 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 00:46:50.544191 iscsid[728]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:46:50.544191 iscsid[728]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 10 00:46:50.544191 iscsid[728]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 10 00:46:50.544191 iscsid[728]: If using hardware iscsi like qla4xxx this message can be ignored.
May 10 00:46:50.544191 iscsid[728]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 10 00:46:50.544191 iscsid[728]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 10 00:46:50.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.546291 systemd[1]: Started iscsid.service.
May 10 00:46:50.554932 systemd[1]: Starting dracut-initqueue.service...
May 10 00:46:50.567855 systemd[1]: Finished dracut-initqueue.service.
May 10 00:46:50.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.569813 systemd[1]: Reached target remote-fs-pre.target.
May 10 00:46:50.569889 systemd[1]: Reached target remote-cryptsetup.target.
May 10 00:46:50.570062 systemd[1]: Reached target remote-fs.target.
May 10 00:46:50.570795 systemd[1]: Starting dracut-pre-mount.service...
May 10 00:46:50.581335 systemd[1]: Finished dracut-pre-mount.service.
May 10 00:46:50.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.669330 ignition[727]: Ignition 2.14.0
May 10 00:46:50.669344 ignition[727]: Stage: fetch-offline
May 10 00:46:50.669432 ignition[727]: no configs at "/usr/lib/ignition/base.d"
May 10 00:46:50.669444 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:46:50.669590 ignition[727]: parsed url from cmdline: ""
May 10 00:46:50.669594 ignition[727]: no config URL provided
May 10 00:46:50.669598 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
May 10 00:46:50.669608 ignition[727]: no config at "/usr/lib/ignition/user.ign"
May 10 00:46:50.669625 ignition[727]: op(1): [started] loading QEMU firmware config module
May 10 00:46:50.669630 ignition[727]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 10 00:46:50.680138 ignition[727]: op(1): [finished] loading QEMU firmware config module
May 10 00:46:50.721640 ignition[727]: parsing config with SHA512: 42f19aaaed5a2a8403c61ad0fd271e3b71eaf923f292eb773d8db4ca1d1f1842a64e9541bb6bd7d1b3b8b8c90fe459099deb35a97cde885942d7783f899d7fa3
May 10 00:46:50.728090 unknown[727]: fetched base config from "system"
May 10 00:46:50.728103 unknown[727]: fetched user config from "qemu"
May 10 00:46:50.728535 ignition[727]: fetch-offline: fetch-offline passed
May 10 00:46:50.728583 ignition[727]: Ignition finished successfully
May 10 00:46:50.732947 systemd[1]: Finished ignition-fetch-offline.service.
May 10 00:46:50.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.733345 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 10 00:46:50.735913 systemd[1]: Starting ignition-kargs.service...
May 10 00:46:50.750489 ignition[749]: Ignition 2.14.0
May 10 00:46:50.750499 ignition[749]: Stage: kargs
May 10 00:46:50.750586 ignition[749]: no configs at "/usr/lib/ignition/base.d"
May 10 00:46:50.750595 ignition[749]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:46:50.754558 ignition[749]: kargs: kargs passed
May 10 00:46:50.754600 ignition[749]: Ignition finished successfully
May 10 00:46:50.757069 systemd[1]: Finished ignition-kargs.service.
May 10 00:46:50.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.759510 systemd[1]: Starting ignition-disks.service...
May 10 00:46:50.767564 ignition[755]: Ignition 2.14.0
May 10 00:46:50.768592 ignition[755]: Stage: disks
May 10 00:46:50.768710 ignition[755]: no configs at "/usr/lib/ignition/base.d"
May 10 00:46:50.768719 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:46:50.772162 ignition[755]: disks: disks passed
May 10 00:46:50.772219 ignition[755]: Ignition finished successfully
May 10 00:46:50.774229 systemd[1]: Finished ignition-disks.service.
May 10 00:46:50.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:50.774770 systemd[1]: Reached target initrd-root-device.target.
May 10 00:46:50.775968 systemd[1]: Reached target local-fs-pre.target.
May 10 00:46:50.795516 systemd[1]: Reached target local-fs.target.
May 10 00:46:50.795972 systemd[1]: Reached target sysinit.target.
May 10 00:46:50.798618 systemd[1]: Reached target basic.target.
May 10 00:46:50.800182 systemd[1]: Starting systemd-fsck-root.service...
May 10 00:46:50.813328 systemd-fsck[763]: ROOT: clean, 623/553520 files, 56023/553472 blocks
May 10 00:46:51.003033 systemd[1]: Finished systemd-fsck-root.service.
May 10 00:46:51.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:51.006608 systemd[1]: Mounting sysroot.mount...
May 10 00:46:51.042399 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 10 00:46:51.042772 systemd[1]: Mounted sysroot.mount.
May 10 00:46:51.043461 systemd[1]: Reached target initrd-root-fs.target.
May 10 00:46:51.044822 systemd[1]: Mounting sysroot-usr.mount...
May 10 00:46:51.046176 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 10 00:46:51.046230 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 10 00:46:51.046254 systemd[1]: Reached target ignition-diskful.target.
May 10 00:46:51.048608 systemd[1]: Mounted sysroot-usr.mount.
May 10 00:46:51.059875 systemd[1]: Starting initrd-setup-root.service...
May 10 00:46:51.066158 initrd-setup-root[773]: cut: /sysroot/etc/passwd: No such file or directory
May 10 00:46:51.068243 initrd-setup-root[781]: cut: /sysroot/etc/group: No such file or directory
May 10 00:46:51.081713 initrd-setup-root[789]: cut: /sysroot/etc/shadow: No such file or directory
May 10 00:46:51.084709 initrd-setup-root[797]: cut: /sysroot/etc/gshadow: No such file or directory
May 10 00:46:51.111925 systemd[1]: Finished initrd-setup-root.service.
May 10 00:46:51.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:51.113533 systemd[1]: Starting ignition-mount.service...
May 10 00:46:51.114785 systemd[1]: Starting sysroot-boot.service...
May 10 00:46:51.119151 bash[814]: umount: /sysroot/usr/share/oem: not mounted.
May 10 00:46:51.134148 ignition[815]: INFO : Ignition 2.14.0
May 10 00:46:51.134148 ignition[815]: INFO : Stage: mount
May 10 00:46:51.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:51.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:51.149390 ignition[815]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:46:51.149390 ignition[815]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:46:51.149390 ignition[815]: INFO : mount: mount passed
May 10 00:46:51.149390 ignition[815]: INFO : Ignition finished successfully
May 10 00:46:51.135910 systemd[1]: Finished ignition-mount.service.
May 10 00:46:51.147761 systemd[1]: Finished sysroot-boot.service.
May 10 00:46:51.283105 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 10 00:46:51.289391 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (825)
May 10 00:46:51.292149 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 10 00:46:51.292174 kernel: BTRFS info (device vda6): using free space tree
May 10 00:46:51.292202 kernel: BTRFS info (device vda6): has skinny extents
May 10 00:46:51.296437 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 10 00:46:51.298810 systemd[1]: Starting ignition-files.service...
May 10 00:46:51.322006 ignition[845]: INFO : Ignition 2.14.0
May 10 00:46:51.322006 ignition[845]: INFO : Stage: files
May 10 00:46:51.324512 ignition[845]: INFO : no configs at "/usr/lib/ignition/base.d"
May 10 00:46:51.324512 ignition[845]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 10 00:46:51.324512 ignition[845]: DEBUG : files: compiled without relabeling support, skipping
May 10 00:46:51.324512 ignition[845]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 10 00:46:51.324512 ignition[845]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 10 00:46:51.332159 ignition[845]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 10 00:46:51.333803 ignition[845]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 10 00:46:51.333803 ignition[845]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 10 00:46:51.333277 unknown[845]: wrote ssh authorized keys file for user: core
May 10 00:46:51.338060 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:46:51.338060 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
May 10 00:46:51.376556 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 10 00:46:51.627206 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
May 10 00:46:51.629461 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:46:51.629461 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
May 10 00:46:52.026643 systemd-networkd[721]: eth0: Gained IPv6LL
May 10 00:46:52.129719 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 10 00:46:52.250995 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 10 00:46:52.250995 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 10 00:46:52.254763 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 10 00:46:52.254763 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:46:52.258352 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 10 00:46:52.258352 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:46:52.263813 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 10 00:46:52.263813 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:46:52.263813 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 10 00:46:52.263813 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:46:52.272151 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 10 00:46:52.272151 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:46:52.272151 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:46:52.272151 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:46:52.282058 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
May 10 00:46:52.582582 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 10 00:46:53.133991 ignition[845]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
May 10 00:46:53.133991 ignition[845]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 10 00:46:53.138106 ignition[845]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 10 00:46:53.188593 ignition[845]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 10 00:46:53.190506 ignition[845]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 10 00:46:53.190506 ignition[845]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 10 00:46:53.190506 ignition[845]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 10 00:46:53.199108 ignition[845]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:46:53.201199 ignition[845]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 10 00:46:53.203092 ignition[845]: INFO : files: files passed
May 10 00:46:53.203914 ignition[845]: INFO : Ignition finished successfully
May 10 00:46:53.205898 systemd[1]: Finished ignition-files.service.
May 10 00:46:53.212748 kernel: kauditd_printk_skb: 25 callbacks suppressed
May 10 00:46:53.212776 kernel: audit: type=1130 audit(1746838013.206:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.207540 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 10 00:46:53.212817 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 10 00:46:53.218341 initrd-setup-root-after-ignition[869]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 10 00:46:53.224330 kernel: audit: type=1130 audit(1746838013.217:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.224387 kernel: audit: type=1130 audit(1746838013.223:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:53.213714 systemd[1]: Starting ignition-quench.service...
May 10 00:46:53.248008 kernel: audit: type=1131 audit(1746838013.223:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 10 00:46:53.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.248217 initrd-setup-root-after-ignition[872]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 10 00:46:53.216556 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 10 00:46:53.218564 systemd[1]: ignition-quench.service: Deactivated successfully. May 10 00:46:53.218670 systemd[1]: Finished ignition-quench.service. May 10 00:46:53.224554 systemd[1]: Reached target ignition-complete.target. May 10 00:46:53.248914 systemd[1]: Starting initrd-parse-etc.service... May 10 00:46:53.260967 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 10 00:46:53.261071 systemd[1]: Finished initrd-parse-etc.service. May 10 00:46:53.275664 kernel: audit: type=1130 audit(1746838013.267:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.275689 kernel: audit: type=1131 audit(1746838013.267:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.268068 systemd[1]: Reached target initrd-fs.target. May 10 00:46:53.275658 systemd[1]: Reached target initrd.target. 
May 10 00:46:53.276539 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 10 00:46:53.277273 systemd[1]: Starting dracut-pre-pivot.service... May 10 00:46:53.286215 systemd[1]: Finished dracut-pre-pivot.service. May 10 00:46:53.292127 kernel: audit: type=1130 audit(1746838013.286:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.287822 systemd[1]: Starting initrd-cleanup.service... May 10 00:46:53.296832 systemd[1]: Stopped target nss-lookup.target. May 10 00:46:53.297808 systemd[1]: Stopped target remote-cryptsetup.target. May 10 00:46:53.299403 systemd[1]: Stopped target timers.target. May 10 00:46:53.300967 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 10 00:46:53.307706 kernel: audit: type=1131 audit(1746838013.302:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.301059 systemd[1]: Stopped dracut-pre-pivot.service. May 10 00:46:53.302579 systemd[1]: Stopped target initrd.target. May 10 00:46:53.307779 systemd[1]: Stopped target basic.target. May 10 00:46:53.309429 systemd[1]: Stopped target ignition-complete.target. May 10 00:46:53.311002 systemd[1]: Stopped target ignition-diskful.target. May 10 00:46:53.312709 systemd[1]: Stopped target initrd-root-device.target. 
May 10 00:46:53.314440 systemd[1]: Stopped target remote-fs.target. May 10 00:46:53.316098 systemd[1]: Stopped target remote-fs-pre.target. May 10 00:46:53.317802 systemd[1]: Stopped target sysinit.target. May 10 00:46:53.319348 systemd[1]: Stopped target local-fs.target. May 10 00:46:53.320908 systemd[1]: Stopped target local-fs-pre.target. May 10 00:46:53.322469 systemd[1]: Stopped target swap.target. May 10 00:46:53.330677 kernel: audit: type=1131 audit(1746838013.325:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.323880 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 10 00:46:53.323981 systemd[1]: Stopped dracut-pre-mount.service. May 10 00:46:53.340290 kernel: audit: type=1131 audit(1746838013.332:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.325529 systemd[1]: Stopped target cryptsetup.target. May 10 00:46:53.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.330716 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 10 00:46:53.330815 systemd[1]: Stopped dracut-initqueue.service. 
May 10 00:46:53.332647 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 10 00:46:53.332734 systemd[1]: Stopped ignition-fetch-offline.service. May 10 00:46:53.340484 systemd[1]: Stopped target paths.target. May 10 00:46:53.341970 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 10 00:46:53.343455 systemd[1]: Stopped systemd-ask-password-console.path. May 10 00:46:53.344868 systemd[1]: Stopped target slices.target. May 10 00:46:53.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.346796 systemd[1]: Stopped target sockets.target. May 10 00:46:53.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.348711 systemd[1]: iscsid.socket: Deactivated successfully. May 10 00:46:53.348802 systemd[1]: Closed iscsid.socket. May 10 00:46:53.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.350546 systemd[1]: iscsiuio.socket: Deactivated successfully. May 10 00:46:53.350643 systemd[1]: Closed iscsiuio.socket. May 10 00:46:53.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.352510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 10 00:46:53.352646 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 10 00:46:53.354567 systemd[1]: ignition-files.service: Deactivated successfully. 
May 10 00:46:53.354654 systemd[1]: Stopped ignition-files.service. May 10 00:46:53.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.357162 systemd[1]: Stopping ignition-mount.service... May 10 00:46:53.358087 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 10 00:46:53.358204 systemd[1]: Stopped kmod-static-nodes.service. May 10 00:46:53.361040 systemd[1]: Stopping sysroot-boot.service... May 10 00:46:53.361846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 10 00:46:53.362015 systemd[1]: Stopped systemd-udev-trigger.service. May 10 00:46:53.364124 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 10 00:46:53.364324 systemd[1]: Stopped dracut-pre-trigger.service. May 10 00:46:53.368804 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 10 00:46:53.368874 systemd[1]: Finished initrd-cleanup.service. May 10 00:46:53.373720 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
May 10 00:46:53.414922 ignition[886]: INFO : Ignition 2.14.0 May 10 00:46:53.414922 ignition[886]: INFO : Stage: umount May 10 00:46:53.416805 ignition[886]: INFO : no configs at "/usr/lib/ignition/base.d" May 10 00:46:53.416805 ignition[886]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 10 00:46:53.416805 ignition[886]: INFO : umount: umount passed May 10 00:46:53.416805 ignition[886]: INFO : Ignition finished successfully May 10 00:46:53.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.416842 systemd[1]: ignition-mount.service: Deactivated successfully. May 10 00:46:53.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.416969 systemd[1]: Stopped ignition-mount.service. May 10 00:46:53.418884 systemd[1]: Stopped target network.target. May 10 00:46:53.420822 systemd[1]: ignition-disks.service: Deactivated successfully. May 10 00:46:53.420872 systemd[1]: Stopped ignition-disks.service. May 10 00:46:53.422557 systemd[1]: ignition-kargs.service: Deactivated successfully. May 10 00:46:53.422601 systemd[1]: Stopped ignition-kargs.service. May 10 00:46:53.424333 systemd[1]: ignition-setup.service: Deactivated successfully. May 10 00:46:53.424397 systemd[1]: Stopped ignition-setup.service. 
May 10 00:46:53.425499 systemd[1]: Stopping systemd-networkd.service... May 10 00:46:53.428467 systemd[1]: Stopping systemd-resolved.service... May 10 00:46:53.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.435417 systemd-networkd[721]: eth0: DHCPv6 lease lost May 10 00:46:53.436736 systemd[1]: systemd-networkd.service: Deactivated successfully. May 10 00:46:53.436881 systemd[1]: Stopped systemd-networkd.service. May 10 00:46:53.442000 audit: BPF prog-id=9 op=UNLOAD May 10 00:46:53.438598 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 10 00:46:53.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.438627 systemd[1]: Closed systemd-networkd.socket. May 10 00:46:53.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.442904 systemd[1]: Stopping network-cleanup.service... May 10 00:46:53.444244 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 10 00:46:53.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.444329 systemd[1]: Stopped parse-ip-for-networkd.service. May 10 00:46:53.446437 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:46:53.446490 systemd[1]: Stopped systemd-sysctl.service. 
May 10 00:46:53.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.448178 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 10 00:46:53.448219 systemd[1]: Stopped systemd-modules-load.service. May 10 00:46:53.450158 systemd[1]: Stopping systemd-udevd.service... May 10 00:46:53.452790 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 10 00:46:53.453200 systemd[1]: systemd-resolved.service: Deactivated successfully. May 10 00:46:53.460000 audit: BPF prog-id=6 op=UNLOAD May 10 00:46:53.453286 systemd[1]: Stopped systemd-resolved.service. May 10 00:46:53.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.460226 systemd[1]: systemd-udevd.service: Deactivated successfully. May 10 00:46:53.460482 systemd[1]: Stopped systemd-udevd.service. May 10 00:46:53.464000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.463253 systemd[1]: network-cleanup.service: Deactivated successfully. May 10 00:46:53.463358 systemd[1]: Stopped network-cleanup.service. May 10 00:46:53.465054 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 10 00:46:53.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 10 00:46:53.465092 systemd[1]: Closed systemd-udevd-control.socket. May 10 00:46:53.466473 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 10 00:46:53.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.466500 systemd[1]: Closed systemd-udevd-kernel.socket. May 10 00:46:53.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.468327 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 10 00:46:53.468364 systemd[1]: Stopped dracut-pre-udev.service. May 10 00:46:53.469942 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 10 00:46:53.469978 systemd[1]: Stopped dracut-cmdline.service. May 10 00:46:53.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.471774 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 10 00:46:53.471807 systemd[1]: Stopped dracut-cmdline-ask.service. May 10 00:46:53.474175 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 10 00:46:53.475390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 10 00:46:53.475442 systemd[1]: Stopped systemd-vconsole-setup.service. May 10 00:46:53.479197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 10 00:46:53.479272 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 10 00:46:53.528693 systemd[1]: sysroot-boot.service: Deactivated successfully. May 10 00:46:53.528832 systemd[1]: Stopped sysroot-boot.service. May 10 00:46:53.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.530823 systemd[1]: Reached target initrd-switch-root.target. May 10 00:46:53.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:53.532296 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 10 00:46:53.532341 systemd[1]: Stopped initrd-setup-root.service. May 10 00:46:53.533529 systemd[1]: Starting initrd-switch-root.service... May 10 00:46:53.548586 systemd[1]: Switching root. May 10 00:46:53.569535 iscsid[728]: iscsid shutting down. May 10 00:46:53.570438 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). May 10 00:46:53.570473 systemd-journald[199]: Journal stopped May 10 00:46:56.609353 kernel: SELinux: Class mctp_socket not defined in policy. May 10 00:46:56.609408 kernel: SELinux: Class anon_inode not defined in policy. 
May 10 00:46:56.609419 kernel: SELinux: the above unknown classes and permissions will be allowed May 10 00:46:56.609429 kernel: SELinux: policy capability network_peer_controls=1 May 10 00:46:56.609439 kernel: SELinux: policy capability open_perms=1 May 10 00:46:56.609455 kernel: SELinux: policy capability extended_socket_class=1 May 10 00:46:56.609464 kernel: SELinux: policy capability always_check_network=0 May 10 00:46:56.609473 kernel: SELinux: policy capability cgroup_seclabel=1 May 10 00:46:56.609482 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 10 00:46:56.609492 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 10 00:46:56.609501 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 10 00:46:56.609512 systemd[1]: Successfully loaded SELinux policy in 38.931ms. May 10 00:46:56.609535 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.374ms. May 10 00:46:56.609546 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 10 00:46:56.609560 systemd[1]: Detected virtualization kvm. May 10 00:46:56.609570 systemd[1]: Detected architecture x86-64. May 10 00:46:56.609580 systemd[1]: Detected first boot. May 10 00:46:56.609591 systemd[1]: Initializing machine ID from VM UUID. May 10 00:46:56.609601 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 10 00:46:56.609613 systemd[1]: Populated /etc with preset unit settings. May 10 00:46:56.609624 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 10 00:46:56.609644 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:46:56.609656 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:56.609666 systemd[1]: iscsiuio.service: Deactivated successfully. May 10 00:46:56.609676 systemd[1]: Stopped iscsiuio.service. May 10 00:46:56.609687 systemd[1]: iscsid.service: Deactivated successfully. May 10 00:46:56.609704 systemd[1]: Stopped iscsid.service. May 10 00:46:56.609715 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 10 00:46:56.609725 systemd[1]: Stopped initrd-switch-root.service. May 10 00:46:56.609735 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 10 00:46:56.609746 systemd[1]: Created slice system-addon\x2dconfig.slice. May 10 00:46:56.609756 systemd[1]: Created slice system-addon\x2drun.slice. May 10 00:46:56.609766 systemd[1]: Created slice system-getty.slice. May 10 00:46:56.609776 systemd[1]: Created slice system-modprobe.slice. May 10 00:46:56.609786 systemd[1]: Created slice system-serial\x2dgetty.slice. May 10 00:46:56.609800 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 10 00:46:56.609810 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 10 00:46:56.609820 systemd[1]: Created slice user.slice. May 10 00:46:56.609841 systemd[1]: Started systemd-ask-password-console.path. May 10 00:46:56.609851 systemd[1]: Started systemd-ask-password-wall.path. May 10 00:46:56.609862 systemd[1]: Set up automount boot.automount. May 10 00:46:56.609872 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 10 00:46:56.609886 systemd[1]: Stopped target initrd-switch-root.target. 
May 10 00:46:56.609897 systemd[1]: Stopped target initrd-fs.target. May 10 00:46:56.609907 systemd[1]: Stopped target initrd-root-fs.target. May 10 00:46:56.609918 systemd[1]: Reached target integritysetup.target. May 10 00:46:56.609928 systemd[1]: Reached target remote-cryptsetup.target. May 10 00:46:56.609937 systemd[1]: Reached target remote-fs.target. May 10 00:46:56.609947 systemd[1]: Reached target slices.target. May 10 00:46:56.609957 systemd[1]: Reached target swap.target. May 10 00:46:56.609967 systemd[1]: Reached target torcx.target. May 10 00:46:56.609977 systemd[1]: Reached target veritysetup.target. May 10 00:46:56.609991 systemd[1]: Listening on systemd-coredump.socket. May 10 00:46:56.610001 systemd[1]: Listening on systemd-initctl.socket. May 10 00:46:56.610011 systemd[1]: Listening on systemd-networkd.socket. May 10 00:46:56.610021 systemd[1]: Listening on systemd-udevd-control.socket. May 10 00:46:56.610031 systemd[1]: Listening on systemd-udevd-kernel.socket. May 10 00:46:56.610041 systemd[1]: Listening on systemd-userdbd.socket. May 10 00:46:56.610051 systemd[1]: Mounting dev-hugepages.mount... May 10 00:46:56.610074 systemd[1]: Mounting dev-mqueue.mount... May 10 00:46:56.610085 systemd[1]: Mounting media.mount... May 10 00:46:56.610099 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:56.610110 systemd[1]: Mounting sys-kernel-debug.mount... May 10 00:46:56.610119 systemd[1]: Mounting sys-kernel-tracing.mount... May 10 00:46:56.610129 systemd[1]: Mounting tmp.mount... May 10 00:46:56.610140 systemd[1]: Starting flatcar-tmpfiles.service... May 10 00:46:56.610151 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:56.610160 systemd[1]: Starting kmod-static-nodes.service... May 10 00:46:56.610170 systemd[1]: Starting modprobe@configfs.service... May 10 00:46:56.610181 systemd[1]: Starting modprobe@dm_mod.service... 
May 10 00:46:56.610195 systemd[1]: Starting modprobe@drm.service... May 10 00:46:56.610204 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:56.610214 systemd[1]: Starting modprobe@fuse.service... May 10 00:46:56.610224 systemd[1]: Starting modprobe@loop.service... May 10 00:46:56.610236 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 10 00:46:56.610247 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 10 00:46:56.610256 systemd[1]: Stopped systemd-fsck-root.service. May 10 00:46:56.610266 kernel: loop: module loaded May 10 00:46:56.610276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 10 00:46:56.610290 systemd[1]: Stopped systemd-fsck-usr.service. May 10 00:46:56.610306 systemd[1]: Stopped systemd-journald.service. May 10 00:46:56.610316 kernel: fuse: init (API version 7.34) May 10 00:46:56.610326 systemd[1]: Starting systemd-journald.service... May 10 00:46:56.610336 systemd[1]: Starting systemd-modules-load.service... May 10 00:46:56.610346 systemd[1]: Starting systemd-network-generator.service... May 10 00:46:56.610356 systemd[1]: Starting systemd-remount-fs.service... May 10 00:46:56.610366 systemd[1]: Starting systemd-udev-trigger.service... May 10 00:46:56.610420 systemd[1]: verity-setup.service: Deactivated successfully. May 10 00:46:56.610436 systemd[1]: Stopped verity-setup.service. May 10 00:46:56.610447 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:56.610460 systemd-journald[1001]: Journal started May 10 00:46:56.610496 systemd-journald[1001]: Runtime Journal (/run/log/journal/b6360025531244f98275997cb9dea449) is 6.0M, max 48.5M, 42.5M free. 
May 10 00:46:53.630000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 10 00:46:53.760000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:46:53.760000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 10 00:46:53.760000 audit: BPF prog-id=10 op=LOAD
May 10 00:46:53.760000 audit: BPF prog-id=10 op=UNLOAD
May 10 00:46:53.760000 audit: BPF prog-id=11 op=LOAD
May 10 00:46:53.760000 audit: BPF prog-id=11 op=UNLOAD
May 10 00:46:53.790000 audit[920]: AVC avc: denied { associate } for pid=920 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 10 00:46:53.790000 audit[920]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=903 pid=920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:46:53.790000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:46:53.792000 audit[920]: AVC avc: denied { associate } for pid=920 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 10 00:46:53.792000 audit[920]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=903 pid=920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:46:53.792000 audit: CWD cwd="/"
May 10 00:46:53.792000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:53.792000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:53.792000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 10 00:46:56.430000 audit: BPF prog-id=12 op=LOAD
May 10 00:46:56.430000 audit: BPF prog-id=3 op=UNLOAD
May 10 00:46:56.430000 audit: BPF prog-id=13 op=LOAD
May 10 00:46:56.430000 audit: BPF prog-id=14 op=LOAD
May 10 00:46:56.430000 audit: BPF prog-id=4 op=UNLOAD
May 10 00:46:56.430000 audit: BPF prog-id=5 op=UNLOAD
May 10 00:46:56.432000 audit: BPF prog-id=15 op=LOAD
May 10 00:46:56.432000 audit: BPF prog-id=12 op=UNLOAD
May 10 00:46:56.432000 audit: BPF prog-id=16 op=LOAD
May 10 00:46:56.432000 audit: BPF prog-id=17 op=LOAD
May 10 00:46:56.432000 audit: BPF prog-id=13 op=UNLOAD
May 10 00:46:56.432000 audit: BPF prog-id=14 op=UNLOAD
May 10 00:46:56.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.444000 audit: BPF prog-id=15 op=UNLOAD
May 10 00:46:56.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.587000 audit: BPF prog-id=18 op=LOAD
May 10 00:46:56.587000 audit: BPF prog-id=19 op=LOAD
May 10 00:46:56.587000 audit: BPF prog-id=20 op=LOAD
May 10 00:46:56.587000 audit: BPF prog-id=16 op=UNLOAD
May 10 00:46:56.587000 audit: BPF prog-id=17 op=UNLOAD
May 10 00:46:56.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.607000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 10 00:46:56.607000 audit[1001]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe78e4d2a0 a2=4000 a3=7ffe78e4d33c items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:46:56.607000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 10 00:46:56.429200 systemd[1]: Queued start job for default target multi-user.target.
May 10 00:46:53.789094 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 10 00:46:56.429212 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 10 00:46:53.789328 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:46:56.433171 systemd[1]: systemd-journald.service: Deactivated successfully.
May 10 00:46:53.789350 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:46:53.789400 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 10 00:46:53.789413 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 10 00:46:53.789446 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 10 00:46:53.789461 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 10 00:46:53.789701 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 10 00:46:53.789740 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 10 00:46:56.612509 systemd[1]: Started systemd-journald.service.
May 10 00:46:53.789755 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 10 00:46:53.790395 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 10 00:46:53.790428 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 10 00:46:53.790444 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 10 00:46:53.790457 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 10 00:46:53.790471 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 10 00:46:53.790483 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:53Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 10 00:46:56.134049 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:46:56.134445 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:46:56.134591 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:46:56.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.134846 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 10 00:46:56.134913 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 10 00:46:56.134998 /usr/lib/systemd/system-generators/torcx-generator[920]: time="2025-05-10T00:46:56Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 10 00:46:56.613820 systemd[1]: Mounted dev-hugepages.mount.
May 10 00:46:56.614750 systemd[1]: Mounted dev-mqueue.mount.
May 10 00:46:56.615623 systemd[1]: Mounted media.mount.
May 10 00:46:56.616471 systemd[1]: Mounted sys-kernel-debug.mount.
May 10 00:46:56.617432 systemd[1]: Mounted sys-kernel-tracing.mount.
May 10 00:46:56.618453 systemd[1]: Mounted tmp.mount.
May 10 00:46:56.619619 systemd[1]: Finished flatcar-tmpfiles.service.
May 10 00:46:56.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.620978 systemd[1]: Finished kmod-static-nodes.service.
May 10 00:46:56.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.622296 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 10 00:46:56.622547 systemd[1]: Finished modprobe@configfs.service.
May 10 00:46:56.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.636002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:46:56.636255 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:46:56.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.637637 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:46:56.637830 systemd[1]: Finished modprobe@drm.service.
May 10 00:46:56.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.638963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:46:56.639166 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:46:56.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.640424 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 10 00:46:56.640599 systemd[1]: Finished modprobe@fuse.service.
May 10 00:46:56.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.641803 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:46:56.642009 systemd[1]: Finished modprobe@loop.service.
May 10 00:46:56.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.643349 systemd[1]: Finished systemd-modules-load.service.
May 10 00:46:56.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.644708 systemd[1]: Finished systemd-network-generator.service.
May 10 00:46:56.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.646043 systemd[1]: Finished systemd-remount-fs.service.
May 10 00:46:56.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.647548 systemd[1]: Reached target network-pre.target.
May 10 00:46:56.650072 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 10 00:46:56.652226 systemd[1]: Mounting sys-kernel-config.mount...
May 10 00:46:56.653071 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 10 00:46:56.655042 systemd[1]: Starting systemd-hwdb-update.service...
May 10 00:46:56.657241 systemd[1]: Starting systemd-journal-flush.service...
May 10 00:46:56.658334 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:46:56.659461 systemd[1]: Starting systemd-random-seed.service...
May 10 00:46:56.660521 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:46:56.661443 systemd[1]: Starting systemd-sysctl.service...
May 10 00:46:56.663921 systemd[1]: Starting systemd-sysusers.service...
May 10 00:46:56.672433 systemd-journald[1001]: Time spent on flushing to /var/log/journal/b6360025531244f98275997cb9dea449 is 14.226ms for 1101 entries.
May 10 00:46:56.672433 systemd-journald[1001]: System Journal (/var/log/journal/b6360025531244f98275997cb9dea449) is 8.0M, max 195.6M, 187.6M free.
May 10 00:46:56.881855 systemd-journald[1001]: Received client request to flush runtime journal.
May 10 00:46:56.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:56.668020 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 10 00:46:56.681039 systemd[1]: Mounted sys-kernel-config.mount.
May 10 00:46:56.882591 udevadm[1025]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 10 00:46:56.701015 systemd[1]: Finished systemd-sysctl.service.
May 10 00:46:56.703715 systemd[1]: Finished systemd-udev-trigger.service.
May 10 00:46:56.706436 systemd[1]: Starting systemd-udev-settle.service...
May 10 00:46:56.709571 systemd[1]: Finished systemd-sysusers.service.
May 10 00:46:56.812152 systemd[1]: Finished systemd-random-seed.service.
May 10 00:46:56.813427 systemd[1]: Reached target first-boot-complete.target.
May 10 00:46:56.883360 systemd[1]: Finished systemd-journal-flush.service.
May 10 00:46:56.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:57.190851 systemd[1]: Finished systemd-hwdb-update.service.
May 10 00:46:57.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:57.192000 audit: BPF prog-id=21 op=LOAD
May 10 00:46:57.192000 audit: BPF prog-id=22 op=LOAD
May 10 00:46:57.192000 audit: BPF prog-id=7 op=UNLOAD
May 10 00:46:57.192000 audit: BPF prog-id=8 op=UNLOAD
May 10 00:46:57.193767 systemd[1]: Starting systemd-udevd.service...
May 10 00:46:57.213790 systemd-udevd[1027]: Using default interface naming scheme 'v252'.
May 10 00:46:57.229903 systemd[1]: Started systemd-udevd.service.
May 10 00:46:57.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:57.231000 audit: BPF prog-id=23 op=LOAD
May 10 00:46:57.233161 systemd[1]: Starting systemd-networkd.service...
May 10 00:46:57.238000 audit: BPF prog-id=24 op=LOAD
May 10 00:46:57.238000 audit: BPF prog-id=25 op=LOAD
May 10 00:46:57.238000 audit: BPF prog-id=26 op=LOAD
May 10 00:46:57.239657 systemd[1]: Starting systemd-userdbd.service...
May 10 00:46:57.271801 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
May 10 00:46:57.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:57.275036 systemd[1]: Started systemd-userdbd.service.
May 10 00:46:57.300403 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 10 00:46:57.315684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 10 00:46:57.332403 kernel: ACPI: button: Power Button [PWRF]
May 10 00:46:57.333697 systemd-networkd[1037]: lo: Link UP
May 10 00:46:57.333986 systemd-networkd[1037]: lo: Gained carrier
May 10 00:46:57.334514 systemd-networkd[1037]: Enumeration completed
May 10 00:46:57.334678 systemd[1]: Started systemd-networkd.service.
May 10 00:46:57.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:57.336530 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 10 00:46:57.337623 systemd-networkd[1037]: eth0: Link UP
May 10 00:46:57.337700 systemd-networkd[1037]: eth0: Gained carrier
May 10 00:46:57.348000 audit[1028]: AVC avc: denied { confidentiality } for pid=1028 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 10 00:46:57.348000 audit[1028]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55866fc694c0 a1=338ac a2=7fae53025bc5 a3=5 items=110 ppid=1027 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:46:57.348000 audit: CWD cwd="/"
May 10 00:46:57.348000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=1 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=2 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=3 name=(null) inode=15382 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=4 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=5 name=(null) inode=15383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=6 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=7 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=8 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=9 name=(null) inode=15385 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=10 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=11 name=(null) inode=15386 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.354743 systemd-networkd[1037]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 10 00:46:57.348000 audit: PATH item=12 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=13 name=(null) inode=15387 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=14 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=15 name=(null) inode=15388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=16 name=(null) inode=15384 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=17 name=(null) inode=15389 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=18 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=19 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=20 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=21 name=(null) inode=15391 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=22 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=23 name=(null) inode=15392 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=24 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=25 name=(null) inode=15393 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=26 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=27 name=(null) inode=15394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=28 name=(null) inode=15390 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=29 name=(null) inode=15395 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=30 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=31 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=32 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=33 name=(null) inode=15397 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=34 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=35 name=(null) inode=15398 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=36 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=37 name=(null) inode=15399 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=38 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=39 name=(null) inode=15400 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=40 name=(null) inode=15396 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=41 name=(null) inode=15401 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=42 name=(null) inode=15381 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=43 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=44 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=45 name=(null) inode=15403 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=46 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=47 name=(null) inode=15404 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=48 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=49 name=(null) inode=15405 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=50 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=51 name=(null) inode=15406 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=52 name=(null) inode=15402 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=53 name=(null) inode=15407 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=55 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=56 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=57 name=(null) inode=15409 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=58 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=59 name=(null) inode=15410 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=60 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=61 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=62 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=63 name=(null) inode=15412 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=64 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=65 name=(null) inode=15413 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=66 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=67 name=(null) inode=15414 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 10 00:46:57.348000 audit: PATH item=68 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=69 name=(null) inode=15415 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=70 name=(null) inode=15411 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=71 name=(null) inode=15416 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=72 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=73 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=74 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=75 name=(null) inode=15418 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=76 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=77 name=(null) inode=15419 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=78 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=79 name=(null) inode=15420 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=80 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=81 name=(null) inode=15421 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=82 name=(null) inode=15417 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=83 name=(null) inode=15422 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=84 name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=85 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=86 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 
00:46:57.348000 audit: PATH item=87 name=(null) inode=15424 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=88 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=89 name=(null) inode=15425 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=90 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=91 name=(null) inode=15426 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=92 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=93 name=(null) inode=15427 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=94 name=(null) inode=15423 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=95 name=(null) inode=15428 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=96 
name=(null) inode=15408 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=97 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=98 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=99 name=(null) inode=15430 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=100 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=101 name=(null) inode=15431 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=102 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=103 name=(null) inode=15432 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=104 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=105 name=(null) inode=15433 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=106 name=(null) inode=15429 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=107 name=(null) inode=15434 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PATH item=109 name=(null) inode=15435 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 10 00:46:57.348000 audit: PROCTITLE proctitle="(udev-worker)" May 10 00:46:57.365637 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 10 00:46:57.369826 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 10 00:46:57.370473 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 10 00:46:57.375437 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 10 00:46:57.381431 kernel: mousedev: PS/2 mouse device common for all mice May 10 00:46:57.431615 kernel: kvm: Nested Virtualization enabled May 10 00:46:57.431746 kernel: SVM: kvm: Nested Paging enabled May 10 00:46:57.432954 kernel: SVM: Virtual VMLOAD VMSAVE supported May 10 00:46:57.433004 kernel: SVM: Virtual GIF supported May 10 00:46:57.480402 kernel: EDAC MC: Ver: 3.0.0 May 10 00:46:57.510067 systemd[1]: Finished systemd-udev-settle.service. 
May 10 00:46:57.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:57.513200 systemd[1]: Starting lvm2-activation-early.service... May 10 00:46:57.523350 lvm[1062]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:46:57.552597 systemd[1]: Finished lvm2-activation-early.service. May 10 00:46:57.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:57.553765 systemd[1]: Reached target cryptsetup.target. May 10 00:46:57.555774 systemd[1]: Starting lvm2-activation.service... May 10 00:46:57.560254 lvm[1063]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 10 00:46:57.587517 systemd[1]: Finished lvm2-activation.service. May 10 00:46:57.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:57.588600 systemd[1]: Reached target local-fs-pre.target. May 10 00:46:57.589453 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 10 00:46:57.589475 systemd[1]: Reached target local-fs.target. May 10 00:46:57.590334 systemd[1]: Reached target machines.target. May 10 00:46:57.592391 systemd[1]: Starting ldconfig.service... May 10 00:46:57.593486 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 10 00:46:57.593528 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:57.594249 systemd[1]: Starting systemd-boot-update.service... May 10 00:46:57.596153 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 10 00:46:57.597960 systemd[1]: Starting systemd-machine-id-commit.service... May 10 00:46:57.600490 systemd[1]: Starting systemd-sysext.service... May 10 00:46:57.601679 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1065 (bootctl) May 10 00:46:57.602558 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 10 00:46:57.610760 systemd[1]: Unmounting usr-share-oem.mount... May 10 00:46:57.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:57.616522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 10 00:46:57.617592 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 10 00:46:57.617761 systemd[1]: Unmounted usr-share-oem.mount. May 10 00:46:57.635416 kernel: loop0: detected capacity change from 0 to 205544 May 10 00:46:57.655470 systemd-fsck[1073]: fsck.fat 4.2 (2021-01-31) May 10 00:46:57.655470 systemd-fsck[1073]: /dev/vda1: 790 files, 120688/258078 clusters May 10 00:46:57.657123 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 10 00:46:57.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:57.660771 systemd[1]: Mounting boot.mount... 
May 10 00:46:57.692630 systemd[1]: Mounted boot.mount. May 10 00:46:57.712106 systemd[1]: Finished systemd-boot-update.service. May 10 00:46:57.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.310539 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 10 00:46:58.326399 kernel: loop1: detected capacity change from 0 to 205544 May 10 00:46:58.331495 (sd-sysext)[1078]: Using extensions 'kubernetes'. May 10 00:46:58.331904 (sd-sysext)[1078]: Merged extensions into '/usr'. May 10 00:46:58.336130 ldconfig[1064]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 10 00:46:58.348285 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:58.349634 systemd[1]: Mounting usr-share-oem.mount... May 10 00:46:58.350639 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:58.351683 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:58.353609 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:58.356099 systemd[1]: Starting modprobe@loop.service... May 10 00:46:58.356971 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:58.357082 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:58.357184 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:58.359600 systemd[1]: Mounted usr-share-oem.mount. May 10 00:46:58.360839 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 10 00:46:58.360995 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:58.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.362366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:58.362549 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:58.363516 kernel: kauditd_printk_skb: 231 callbacks suppressed May 10 00:46:58.363575 kernel: audit: type=1130 audit(1746838018.361:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.371808 kernel: audit: type=1131 audit(1746838018.361:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.373191 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:58.373322 systemd[1]: Finished modprobe@loop.service. May 10 00:46:58.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:58.376395 kernel: audit: type=1130 audit(1746838018.372:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.376421 kernel: audit: type=1131 audit(1746838018.372:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.381160 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:58.381258 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:58.382131 systemd[1]: Finished systemd-sysext.service. May 10 00:46:58.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.384399 kernel: audit: type=1130 audit(1746838018.380:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.384449 kernel: audit: type=1131 audit(1746838018.380:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:58.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.394198 systemd[1]: Starting ensure-sysext.service... May 10 00:46:58.396395 kernel: audit: type=1130 audit(1746838018.392:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.398113 systemd[1]: Starting systemd-tmpfiles-setup.service... May 10 00:46:58.403041 systemd[1]: Reloading. May 10 00:46:58.429518 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 10 00:46:58.431587 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 10 00:46:58.438888 systemd-tmpfiles[1085]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 10 00:46:58.449125 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2025-05-10T00:46:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:46:58.449160 /usr/lib/systemd/system-generators/torcx-generator[1105]: time="2025-05-10T00:46:58Z" level=info msg="torcx already run" May 10 00:46:58.558184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:46:58.558206 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 10 00:46:58.576035 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:46:58.630000 audit: BPF prog-id=27 op=LOAD May 10 00:46:58.630000 audit: BPF prog-id=18 op=UNLOAD May 10 00:46:58.633267 kernel: audit: type=1334 audit(1746838018.630:162): prog-id=27 op=LOAD May 10 00:46:58.633327 kernel: audit: type=1334 audit(1746838018.630:163): prog-id=18 op=UNLOAD May 10 00:46:58.633351 kernel: audit: type=1334 audit(1746838018.633:164): prog-id=28 op=LOAD May 10 00:46:58.633000 audit: BPF prog-id=28 op=LOAD May 10 00:46:58.634000 audit: BPF prog-id=29 op=LOAD May 10 00:46:58.634000 audit: BPF prog-id=19 op=UNLOAD May 10 00:46:58.634000 audit: BPF prog-id=20 op=UNLOAD May 10 00:46:58.634000 audit: BPF prog-id=30 op=LOAD May 10 00:46:58.634000 audit: BPF prog-id=24 op=UNLOAD May 10 00:46:58.634000 audit: BPF prog-id=31 op=LOAD May 10 00:46:58.634000 audit: BPF prog-id=32 op=LOAD May 10 00:46:58.634000 audit: BPF prog-id=25 op=UNLOAD May 10 00:46:58.634000 audit: BPF prog-id=26 op=UNLOAD May 10 00:46:58.636000 audit: BPF prog-id=33 op=LOAD May 10 00:46:58.636000 audit: BPF prog-id=23 op=UNLOAD May 10 00:46:58.636000 audit: BPF prog-id=34 op=LOAD May 10 00:46:58.636000 audit: BPF prog-id=35 op=LOAD May 10 00:46:58.636000 audit: BPF prog-id=21 op=UNLOAD May 10 00:46:58.636000 audit: BPF prog-id=22 op=UNLOAD May 10 00:46:58.638932 systemd[1]: Finished ldconfig.service. May 10 00:46:58.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.653605 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 10 00:46:58.653855 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:58.655063 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:58.656792 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:58.658761 systemd[1]: Starting modprobe@loop.service... May 10 00:46:58.659679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:58.659821 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 10 00:46:58.659942 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:58.660835 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 10 00:46:58.660946 systemd[1]: Finished modprobe@dm_mod.service. May 10 00:46:58.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.662234 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 10 00:46:58.662332 systemd[1]: Finished modprobe@efi_pstore.service. May 10 00:46:58.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 10 00:46:58.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.663672 systemd[1]: modprobe@loop.service: Deactivated successfully. May 10 00:46:58.663773 systemd[1]: Finished modprobe@loop.service. May 10 00:46:58.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 10 00:46:58.665090 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 10 00:46:58.665181 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 10 00:46:58.666656 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 10 00:46:58.666844 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 10 00:46:58.668274 systemd[1]: Starting modprobe@dm_mod.service... May 10 00:46:58.670141 systemd[1]: Starting modprobe@efi_pstore.service... May 10 00:46:58.671900 systemd[1]: Starting modprobe@loop.service... May 10 00:46:58.672787 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 10 00:46:58.672889 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
May 10 00:46:58.672982 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:46:58.673737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:46:58.673847 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:46:58.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.675085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:46:58.675181 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:46:58.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.676427 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:46:58.676522 systemd[1]: Finished modprobe@loop.service.
May 10 00:46:58.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.677709 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:46:58.677797 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:46:58.680054 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:46:58.680261 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 10 00:46:58.681248 systemd[1]: Starting modprobe@dm_mod.service...
May 10 00:46:58.683144 systemd[1]: Starting modprobe@drm.service...
May 10 00:46:58.684879 systemd[1]: Starting modprobe@efi_pstore.service...
May 10 00:46:58.687035 systemd[1]: Starting modprobe@loop.service...
May 10 00:46:58.688021 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 10 00:46:58.688216 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:46:58.689893 systemd[1]: Starting systemd-networkd-wait-online.service...
May 10 00:46:58.691145 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 10 00:46:58.692586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 10 00:46:58.692756 systemd[1]: Finished modprobe@dm_mod.service.
May 10 00:46:58.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.694296 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 10 00:46:58.694454 systemd[1]: Finished modprobe@drm.service.
May 10 00:46:58.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.695885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 10 00:46:58.696055 systemd[1]: Finished modprobe@efi_pstore.service.
May 10 00:46:58.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.697906 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 10 00:46:58.698073 systemd[1]: Finished modprobe@loop.service.
May 10 00:46:58.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.702528 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 10 00:46:58.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.724448 systemd[1]: Finished ensure-sysext.service.
May 10 00:46:58.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.726906 systemd[1]: Starting audit-rules.service...
May 10 00:46:58.730331 systemd[1]: Starting clean-ca-certificates.service...
May 10 00:46:58.734000 audit: BPF prog-id=36 op=LOAD
May 10 00:46:58.732648 systemd[1]: Starting systemd-journal-catalog-update.service...
May 10 00:46:58.733640 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 10 00:46:58.733688 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 10 00:46:58.741853 systemd[1]: Starting systemd-resolved.service...
May 10 00:46:58.743000 audit: BPF prog-id=37 op=LOAD
May 10 00:46:58.744659 systemd[1]: Starting systemd-timesyncd.service...
May 10 00:46:58.758882 systemd[1]: Starting systemd-update-utmp.service...
May 10 00:46:58.760767 systemd[1]: Finished clean-ca-certificates.service.
May 10 00:46:58.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.762178 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 10 00:46:58.846000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.849057 systemd[1]: Started systemd-timesyncd.service.
May 10 00:46:58.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:58.850726 systemd[1]: Reached target time-set.target.
May 10 00:46:58.852276 systemd[1]: Finished systemd-update-utmp.service.
May 10 00:46:59.401492 systemd-timesyncd[1164]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 10 00:46:59.401776 systemd-timesyncd[1164]: Initial clock synchronization to Sat 2025-05-10 00:46:59.401392 UTC.
May 10 00:46:59.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:59.402479 systemd-resolved[1162]: Positive Trust Anchors:
May 10 00:46:59.402496 systemd-resolved[1162]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 10 00:46:59.402541 systemd-resolved[1162]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 10 00:46:59.403397 systemd[1]: Finished systemd-journal-catalog-update.service.
May 10 00:46:59.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:59.406560 systemd[1]: Starting systemd-update-done.service...
May 10 00:46:59.412655 systemd-resolved[1162]: Defaulting to hostname 'linux'.
May 10 00:46:59.413678 systemd[1]: Finished systemd-update-done.service.
May 10 00:46:59.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:59.415592 systemd[1]: Started systemd-resolved.service.
May 10 00:46:59.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 10 00:46:59.416769 systemd[1]: Reached target network.target.
May 10 00:46:59.417605 systemd[1]: Reached target nss-lookup.target.
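The negative trust anchors that systemd-resolved prints above are not arbitrary: the sixteen `N.172.in-addr.arpa` zones map exactly onto 172.16.0.0/12, alongside the 10.0.0.0/8 and 192.168.0.0/16 reverse zones. A small sketch (the helper name `private_reverse_zones` is mine, not part of systemd) deriving that list and sanity-checking it against the standard `ipaddress` module:

```python
import ipaddress

def private_reverse_zones():
    """Derive the IPv4 reverse-DNS zones for the RFC 1918 private ranges,
    matching the negative trust anchors systemd-resolved logs at startup."""
    zones = ["10.in-addr.arpa"]               # 10.0.0.0/8
    for second_octet in range(16, 32):        # 172.16.0.0/12 spans 172.16-172.31
        zones.append(f"{second_octet}.172.in-addr.arpa")
    zones.append("168.192.in-addr.arpa")      # 192.168.0.0/16
    # Sanity check: each zone's covering network really is private.
    for zone in zones:
        octets = zone.replace(".in-addr.arpa", "").split(".")[::-1]
        octets += ["0"] * (4 - len(octets))   # pad to a full dotted quad
        assert ipaddress.ip_address(".".join(octets)).is_private
    return zones
```

The point of a negative trust anchor is that DNSSEC validation is not attempted for these zones, since private reverse zones cannot be signed under the public root.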
May 10 00:46:59.420000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 10 00:46:59.420000 audit[1181]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc87a791d0 a2=420 a3=0 items=0 ppid=1158 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 10 00:46:59.420000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 10 00:46:59.421567 augenrules[1181]: No rules
May 10 00:46:59.422217 systemd[1]: Finished audit-rules.service.
May 10 00:46:59.423227 systemd[1]: Reached target sysinit.target.
May 10 00:46:59.424221 systemd[1]: Started motdgen.path.
May 10 00:46:59.425098 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 10 00:46:59.426489 systemd[1]: Started logrotate.timer.
May 10 00:46:59.427614 systemd[1]: Started mdadm.timer.
May 10 00:46:59.428636 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 10 00:46:59.429842 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 10 00:46:59.429869 systemd[1]: Reached target paths.target.
May 10 00:46:59.430795 systemd[1]: Reached target timers.target.
May 10 00:46:59.432114 systemd[1]: Listening on dbus.socket.
May 10 00:46:59.434006 systemd[1]: Starting docker.socket...
May 10 00:46:59.496201 systemd[1]: Listening on sshd.socket.
May 10 00:46:59.497266 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:46:59.497761 systemd[1]: Listening on docker.socket.
May 10 00:46:59.498733 systemd[1]: Reached target sockets.target.
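The `audit[1]: SERVICE_START` / `SERVICE_STOP` records scattered through this log all follow one key=value shape, with the unit name inside `msg='unit=...'` and the outcome in `res=...`. A minimal parser for pairing those events (a sketch of my own; not a tool that appears in this log):

```python
import re

# Matches journal-style records like:
#   audit[1]: SERVICE_START pid=1 ... msg='unit=modprobe@loop comm="systemd" ... res=success'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=(\S+) .*?res=(\w+)"
)

def parse_service_events(lines):
    """Yield (event, unit, result) tuples from audit service records,
    skipping lines that are not SERVICE_START/SERVICE_STOP records."""
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            yield m.group(1), m.group(2), m.group(3)
```

Fed the lines above, this would report each `modprobe@*` template instance starting and stopping in quick succession, which is exactly the expected pattern for oneshot module-load units.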
May 10 00:46:59.499596 systemd[1]: Reached target basic.target.
May 10 00:46:59.500435 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 10 00:46:59.500468 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 10 00:46:59.501642 systemd[1]: Starting containerd.service...
May 10 00:46:59.503592 systemd[1]: Starting dbus.service...
May 10 00:46:59.505571 systemd[1]: Starting enable-oem-cloudinit.service...
May 10 00:46:59.507880 systemd[1]: Starting extend-filesystems.service...
May 10 00:46:59.509211 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 10 00:46:59.510418 jq[1190]: false
May 10 00:46:59.510701 systemd[1]: Starting motdgen.service...
May 10 00:46:59.513103 systemd[1]: Starting prepare-helm.service...
May 10 00:46:59.515247 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 10 00:46:59.518088 systemd[1]: Starting sshd-keygen.service...
May 10 00:46:59.527552 systemd[1]: Starting systemd-logind.service...
May 10 00:46:59.529355 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 10 00:46:59.532354 dbus-daemon[1189]: [system] SELinux support is enabled
May 10 00:46:59.534766 extend-filesystems[1191]: Found loop1
May 10 00:46:59.534766 extend-filesystems[1191]: Found sr0
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda1
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda2
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda3
May 10 00:46:59.534766 extend-filesystems[1191]: Found usr
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda4
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda6
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda7
May 10 00:46:59.534766 extend-filesystems[1191]: Found vda9
May 10 00:46:59.534766 extend-filesystems[1191]: Checking size of /dev/vda9
May 10 00:46:59.529407 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 10 00:46:59.529790 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 10 00:46:59.594268 jq[1209]: true
May 10 00:46:59.530387 systemd[1]: Starting update-engine.service...
May 10 00:46:59.532549 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 10 00:46:59.534089 systemd[1]: Started dbus.service.
May 10 00:46:59.537549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 10 00:46:59.537707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 10 00:46:59.537974 systemd[1]: motdgen.service: Deactivated successfully.
May 10 00:46:59.538101 systemd[1]: Finished motdgen.service.
May 10 00:46:59.539719 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 10 00:46:59.539844 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 10 00:46:59.551617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 10 00:46:59.551639 systemd[1]: Reached target system-config.target.
May 10 00:46:59.553174 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 10 00:46:59.553188 systemd[1]: Reached target user-config.target.
May 10 00:46:59.629385 jq[1214]: true
May 10 00:46:59.635771 tar[1213]: linux-amd64/helm
May 10 00:46:59.642819 systemd-logind[1204]: Watching system buttons on /dev/input/event1 (Power Button)
May 10 00:46:59.642838 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 10 00:46:59.643322 systemd-logind[1204]: New seat seat0.
May 10 00:46:59.647355 systemd[1]: Started systemd-logind.service.
May 10 00:46:59.662655 env[1215]: time="2025-05-10T00:46:59.662604774Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 10 00:46:59.673419 extend-filesystems[1191]: Resized partition /dev/vda9
May 10 00:46:59.677257 extend-filesystems[1239]: resize2fs 1.46.5 (30-Dec-2021)
May 10 00:46:59.683600 update_engine[1208]: I0510 00:46:59.683225 1208 main.cc:92] Flatcar Update Engine starting
May 10 00:46:59.685690 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 10 00:46:59.691245 systemd[1]: Finished systemd-machine-id-commit.service.
May 10 00:46:59.719535 env[1215]: time="2025-05-10T00:46:59.719473493Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 10 00:46:59.719693 env[1215]: time="2025-05-10T00:46:59.719642941Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.720983 env[1215]: time="2025-05-10T00:46:59.720953580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 10 00:46:59.721072 env[1215]: time="2025-05-10T00:46:59.721049900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.721377 env[1215]: time="2025-05-10T00:46:59.721352678Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:46:59.721478 env[1215]: time="2025-05-10T00:46:59.721456493Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.721582 env[1215]: time="2025-05-10T00:46:59.721558915Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 10 00:46:59.721672 env[1215]: time="2025-05-10T00:46:59.721650717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.721835 env[1215]: time="2025-05-10T00:46:59.721814655Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.722155 env[1215]: time="2025-05-10T00:46:59.722135036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 10 00:46:59.722376 env[1215]: time="2025-05-10T00:46:59.722352684Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 10 00:46:59.722461 env[1215]: time="2025-05-10T00:46:59.722439837Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 10 00:46:59.722610 env[1215]: time="2025-05-10T00:46:59.722589147Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 10 00:46:59.722701 env[1215]: time="2025-05-10T00:46:59.722680639Z" level=info msg="metadata content store policy set" policy=shared
May 10 00:46:59.733052 systemd[1]: Started update-engine.service.
May 10 00:46:59.755371 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 10 00:46:59.755443 update_engine[1208]: I0510 00:46:59.741147 1208 update_check_scheduler.cc:74] Next update check in 8m44s
May 10 00:46:59.758220 systemd[1]: Started locksmithd.service.
May 10 00:46:59.935065 systemd-networkd[1037]: eth0: Gained IPv6LL
May 10 00:46:59.936971 systemd[1]: Finished systemd-networkd-wait-online.service.
May 10 00:46:59.938771 systemd[1]: Reached target network-online.target.
May 10 00:47:00.036863 systemd[1]: Starting kubelet.service...
May 10 00:47:00.076749 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 10 00:47:00.099880 systemd[1]: Finished sshd-keygen.service.
May 10 00:47:00.102152 systemd[1]: Starting issuegen.service...
May 10 00:47:00.108251 systemd[1]: issuegen.service: Deactivated successfully.
May 10 00:47:00.108386 systemd[1]: Finished issuegen.service.
May 10 00:47:00.110620 systemd[1]: Starting systemd-user-sessions.service...
May 10 00:47:00.228920 systemd[1]: Finished systemd-user-sessions.service.
May 10 00:47:00.231143 systemd[1]: Started getty@tty1.service.
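The kernel line above records ext4 on vda9 growing online from 553472 to 1864699 blocks, and resize2fs later confirms the block size is 4k. The arithmetic behind those numbers, as a small sketch (helper name is mine):

```python
BLOCK_SIZE = 4096  # ext4 4k blocks, per the resize2fs output in this log

def blocks_to_bytes(blocks, block_size=BLOCK_SIZE):
    """Convert an ext4 block count to bytes."""
    return blocks * block_size

old_bytes = blocks_to_bytes(553472)    # size before the resize (~2.1 GiB)
new_bytes = blocks_to_bytes(1864699)   # size after the resize (~7.1 GiB)
growth_bytes = new_bytes - old_bytes   # space reclaimed by growing the partition
```

This matches the usual Flatcar first-boot flow: the root partition is grown to fill the disk, then the filesystem is resized online while mounted on `/`.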
May 10 00:47:00.232454 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 10 00:47:00.232914 systemd[1]: Started serial-getty@ttyS0.service.
May 10 00:47:00.234197 systemd[1]: Reached target getty.target.
May 10 00:47:00.293920 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 10 00:47:00.751397 extend-filesystems[1239]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 10 00:47:00.751397 extend-filesystems[1239]: old_desc_blocks = 1, new_desc_blocks = 1
May 10 00:47:00.751397 extend-filesystems[1239]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 10 00:47:00.757248 extend-filesystems[1191]: Resized filesystem in /dev/vda9
May 10 00:47:00.752641 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 10 00:47:00.752827 systemd[1]: Finished extend-filesystems.service.
May 10 00:47:00.759730 tar[1213]: linux-amd64/LICENSE
May 10 00:47:00.759730 tar[1213]: linux-amd64/README.md
May 10 00:47:00.761833 env[1215]: time="2025-05-10T00:47:00.761767549Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 10 00:47:00.761833 env[1215]: time="2025-05-10T00:47:00.761835607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761855023Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761914715Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761935745Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761957585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761973215Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.761991609Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.762008060Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.762024531Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.762041503Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762181 env[1215]: time="2025-05-10T00:47:00.762057583Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 10 00:47:00.762380 env[1215]: time="2025-05-10T00:47:00.762214798Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 10 00:47:00.762380 env[1215]: time="2025-05-10T00:47:00.762300829Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 10 00:47:00.762650 env[1215]: time="2025-05-10T00:47:00.762616752Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 10 00:47:00.762700 env[1215]: time="2025-05-10T00:47:00.762651357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762700 env[1215]: time="2025-05-10T00:47:00.762669300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 10 00:47:00.762749 env[1215]: time="2025-05-10T00:47:00.762721388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762749 env[1215]: time="2025-05-10T00:47:00.762737308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762788 env[1215]: time="2025-05-10T00:47:00.762752667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762788 env[1215]: time="2025-05-10T00:47:00.762769188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762788 env[1215]: time="2025-05-10T00:47:00.762784376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762872 env[1215]: time="2025-05-10T00:47:00.762802691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762872 env[1215]: time="2025-05-10T00:47:00.762822618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762872 env[1215]: time="2025-05-10T00:47:00.762836935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 10 00:47:00.762872 env[1215]: time="2025-05-10T00:47:00.762854458Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 10 00:47:00.763015 env[1215]: time="2025-05-10T00:47:00.762995753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 10 00:47:00.763040 env[1215]: time="2025-05-10T00:47:00.763018055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 10 00:47:00.763040 env[1215]: time="2025-05-10T00:47:00.763034325Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 10 00:47:00.763082 env[1215]: time="2025-05-10T00:47:00.763048943Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 10 00:47:00.763082 env[1215]: time="2025-05-10T00:47:00.763068259Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 10 00:47:00.763134 env[1215]: time="2025-05-10T00:47:00.763081163Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 10 00:47:00.763134 env[1215]: time="2025-05-10T00:47:00.763103505Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 10 00:47:00.763192 env[1215]: time="2025-05-10T00:47:00.763170290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 10 00:47:00.763532 env[1215]: time="2025-05-10T00:47:00.763453181Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 10 00:47:00.764118 env[1215]: time="2025-05-10T00:47:00.763542308Z" level=info msg="Connect containerd service"
May 10 00:47:00.764162 bash[1243]: Updated "/home/core/.ssh/authorized_keys"
May 10 00:47:00.764250 env[1215]: time="2025-05-10T00:47:00.764196746Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 10 00:47:00.764834 env[1215]: time="2025-05-10T00:47:00.764797502Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 10 00:47:00.765089 env[1215]: time="2025-05-10T00:47:00.765062249Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 10 00:47:00.765147 env[1215]: time="2025-05-10T00:47:00.765107354Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 10 00:47:00.765171 env[1215]: time="2025-05-10T00:47:00.765149442Z" level=info msg="containerd successfully booted in 1.103229s"
May 10 00:47:00.766135 env[1215]: time="2025-05-10T00:47:00.766057686Z" level=info msg="Start subscribing containerd event"
May 10 00:47:00.766145 systemd[1]: Started containerd.service.
May 10 00:47:00.767509 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 10 00:47:00.769001 systemd[1]: Finished prepare-helm.service.
May 10 00:47:00.770515 env[1215]: time="2025-05-10T00:47:00.770468870Z" level=info msg="Start recovering state"
May 10 00:47:00.770832 env[1215]: time="2025-05-10T00:47:00.770780966Z" level=info msg="Start event monitor"
May 10 00:47:00.770832 env[1215]: time="2025-05-10T00:47:00.770824998Z" level=info msg="Start snapshots syncer"
May 10 00:47:00.770971 env[1215]: time="2025-05-10T00:47:00.770845857Z" level=info msg="Start cni network conf syncer for default"
May 10 00:47:00.770971 env[1215]: time="2025-05-10T00:47:00.770866837Z" level=info msg="Start streaming server"
May 10 00:47:01.677765 systemd[1]: Started kubelet.service.
May 10 00:47:01.679525 systemd[1]: Reached target multi-user.target.
May 10 00:47:01.681995 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 10 00:47:01.690547 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 10 00:47:01.690731 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 10 00:47:01.692041 systemd[1]: Startup finished in 793ms (kernel) + 5.700s (initrd) + 7.553s (userspace) = 14.047s.
May 10 00:47:02.066109 systemd[1]: Created slice system-sshd.slice.
May 10 00:47:02.067611 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:35644.service.
May 10 00:47:02.110212 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 35644 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:47:02.113714 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:47:02.123226 systemd-logind[1204]: New session 1 of user core.
May 10 00:47:02.124334 systemd[1]: Created slice user-500.slice.
May 10 00:47:02.126000 systemd[1]: Starting user-runtime-dir@500.service...
May 10 00:47:02.147501 systemd[1]: Finished user-runtime-dir@500.service.
May 10 00:47:02.149610 systemd[1]: Starting user@500.service...
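The `Startup finished` line above splits boot time into kernel, initrd, and userspace phases. Summing the printed values recovers the reported total to within a millisecond (the naive sum is 14.046s against the printed 14.047s, presumably because each phase was rounded before printing):

```python
# Phase durations exactly as printed in the "Startup finished" log line above.
phases_ms = {"kernel": 793, "initrd": 5700, "userspace": 7553}

total_ms = sum(phases_ms.values())  # 14046 ms; the log reports 14.047s,
                                    # consistent with per-phase rounding
```

Userspace dominates here, which matches the log: most of the boot is systemd bringing up units, not the kernel or initrd.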
May 10 00:47:02.153306 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:02.244518 systemd[1282]: Queued start job for default target default.target. May 10 00:47:02.245259 systemd[1282]: Reached target paths.target. May 10 00:47:02.245281 systemd[1282]: Reached target sockets.target. May 10 00:47:02.245297 systemd[1282]: Reached target timers.target. May 10 00:47:02.245312 systemd[1282]: Reached target basic.target. May 10 00:47:02.245441 systemd[1]: Started user@500.service. May 10 00:47:02.246019 systemd[1282]: Reached target default.target. May 10 00:47:02.246071 systemd[1282]: Startup finished in 83ms. May 10 00:47:02.246421 systemd[1]: Started session-1.scope. May 10 00:47:02.315190 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:35650.service. May 10 00:47:02.357778 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 35650 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:47:02.375596 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:02.394098 systemd-logind[1204]: New session 2 of user core. May 10 00:47:02.395147 systemd[1]: Started session-2.scope. May 10 00:47:02.449558 sshd[1292]: pam_unix(sshd:session): session closed for user core May 10 00:47:02.452642 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:35650.service: Deactivated successfully. May 10 00:47:02.453258 systemd[1]: session-2.scope: Deactivated successfully. May 10 00:47:02.454863 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:35658.service. May 10 00:47:02.455510 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. May 10 00:47:02.456401 systemd-logind[1204]: Removed session 2. 
May 10 00:47:02.493032 kubelet[1271]: E0510 00:47:02.492985 1271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:47:02.494953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:47:02.495099 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:47:02.495367 systemd[1]: kubelet.service: Consumed 1.629s CPU time. May 10 00:47:02.507590 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:47:02.509171 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:02.512920 systemd-logind[1204]: New session 3 of user core. May 10 00:47:02.513726 systemd[1]: Started session-3.scope. May 10 00:47:02.563158 sshd[1298]: pam_unix(sshd:session): session closed for user core May 10 00:47:02.566183 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:35658.service: Deactivated successfully. May 10 00:47:02.566666 systemd[1]: session-3.scope: Deactivated successfully. May 10 00:47:02.567129 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. May 10 00:47:02.567998 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:35674.service. May 10 00:47:02.568662 systemd-logind[1204]: Removed session 3. May 10 00:47:02.601566 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 35674 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:47:02.602984 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:02.606440 systemd-logind[1204]: New session 4 of user core. May 10 00:47:02.607417 systemd[1]: Started session-4.scope. 
May 10 00:47:02.661939 sshd[1304]: pam_unix(sshd:session): session closed for user core May 10 00:47:02.664849 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:35674.service: Deactivated successfully. May 10 00:47:02.665510 systemd[1]: session-4.scope: Deactivated successfully. May 10 00:47:02.666074 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit. May 10 00:47:02.667082 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:35680.service. May 10 00:47:02.667787 systemd-logind[1204]: Removed session 4. May 10 00:47:02.702853 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 35680 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:47:02.704363 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:47:02.707838 systemd-logind[1204]: New session 5 of user core. May 10 00:47:02.708582 systemd[1]: Started session-5.scope. May 10 00:47:02.765491 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 10 00:47:02.765749 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 10 00:47:02.799931 systemd[1]: Starting docker.service... 
May 10 00:47:02.858249 env[1325]: time="2025-05-10T00:47:02.858161565Z" level=info msg="Starting up" May 10 00:47:02.860214 env[1325]: time="2025-05-10T00:47:02.860181023Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:47:02.860214 env[1325]: time="2025-05-10T00:47:02.860203004Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:47:02.860286 env[1325]: time="2025-05-10T00:47:02.860224805Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:47:02.860286 env[1325]: time="2025-05-10T00:47:02.860238270Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:47:02.862070 env[1325]: time="2025-05-10T00:47:02.862020133Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 10 00:47:02.862070 env[1325]: time="2025-05-10T00:47:02.862052012Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 10 00:47:02.862070 env[1325]: time="2025-05-10T00:47:02.862076007Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 10 00:47:02.862278 env[1325]: time="2025-05-10T00:47:02.862086888Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 10 00:47:03.658631 env[1325]: time="2025-05-10T00:47:03.658569852Z" level=info msg="Loading containers: start." May 10 00:47:03.789938 kernel: Initializing XFRM netlink socket May 10 00:47:03.819682 env[1325]: time="2025-05-10T00:47:03.819633141Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 10 00:47:03.869433 systemd-networkd[1037]: docker0: Link UP May 10 00:47:03.897408 env[1325]: time="2025-05-10T00:47:03.897353131Z" level=info msg="Loading containers: done." 
May 10 00:47:03.906603 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1722004136-merged.mount: Deactivated successfully. May 10 00:47:03.909995 env[1325]: time="2025-05-10T00:47:03.909859269Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 10 00:47:03.910139 env[1325]: time="2025-05-10T00:47:03.910062350Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 10 00:47:03.910193 env[1325]: time="2025-05-10T00:47:03.910169712Z" level=info msg="Daemon has completed initialization" May 10 00:47:03.934967 systemd[1]: Started docker.service. May 10 00:47:03.942881 env[1325]: time="2025-05-10T00:47:03.942820475Z" level=info msg="API listen on /run/docker.sock" May 10 00:47:04.811081 env[1215]: time="2025-05-10T00:47:04.811019779Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 10 00:47:05.406526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307874845.mount: Deactivated successfully. 
May 10 00:47:07.023044 env[1215]: time="2025-05-10T00:47:07.022961960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:07.025116 env[1215]: time="2025-05-10T00:47:07.025077679Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:07.027168 env[1215]: time="2025-05-10T00:47:07.027101495Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:07.029094 env[1215]: time="2025-05-10T00:47:07.029050260Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:07.029919 env[1215]: time="2025-05-10T00:47:07.029854098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" May 10 00:47:07.031826 env[1215]: time="2025-05-10T00:47:07.031788095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 10 00:47:08.842234 env[1215]: time="2025-05-10T00:47:08.842165145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:08.844402 env[1215]: time="2025-05-10T00:47:08.844351887Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 10 00:47:08.846704 env[1215]: time="2025-05-10T00:47:08.846652182Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:08.849069 env[1215]: time="2025-05-10T00:47:08.849003332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:08.849976 env[1215]: time="2025-05-10T00:47:08.849935140Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" May 10 00:47:08.850542 env[1215]: time="2025-05-10T00:47:08.850503796Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 10 00:47:12.679430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 10 00:47:12.679624 systemd[1]: Stopped kubelet.service. May 10 00:47:12.679671 systemd[1]: kubelet.service: Consumed 1.629s CPU time. May 10 00:47:12.680903 systemd[1]: Starting kubelet.service... May 10 00:47:12.770356 systemd[1]: Started kubelet.service. May 10 00:47:12.820072 kubelet[1458]: E0510 00:47:12.820001 1458 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:47:12.822727 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:47:12.822841 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 10 00:47:16.165475 env[1215]: time="2025-05-10T00:47:16.165384371Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:16.168311 env[1215]: time="2025-05-10T00:47:16.168255467Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:16.170190 env[1215]: time="2025-05-10T00:47:16.170161412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:16.172212 env[1215]: time="2025-05-10T00:47:16.172174107Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:16.172926 env[1215]: time="2025-05-10T00:47:16.172864712Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" May 10 00:47:16.173413 env[1215]: time="2025-05-10T00:47:16.173380700Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 10 00:47:18.313357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4203981167.mount: Deactivated successfully. 
May 10 00:47:19.442657 env[1215]: time="2025-05-10T00:47:19.442582208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.444715 env[1215]: time="2025-05-10T00:47:19.444675715Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.446303 env[1215]: time="2025-05-10T00:47:19.446208049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.447563 env[1215]: time="2025-05-10T00:47:19.447512846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:19.447986 env[1215]: time="2025-05-10T00:47:19.447945628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" May 10 00:47:19.448533 env[1215]: time="2025-05-10T00:47:19.448494868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 10 00:47:20.017285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1719782935.mount: Deactivated successfully. 
May 10 00:47:21.222971 env[1215]: time="2025-05-10T00:47:21.222864068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:21.225604 env[1215]: time="2025-05-10T00:47:21.225574442Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:21.227921 env[1215]: time="2025-05-10T00:47:21.227900264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:21.229741 env[1215]: time="2025-05-10T00:47:21.229690953Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:21.230562 env[1215]: time="2025-05-10T00:47:21.230521080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 10 00:47:21.231105 env[1215]: time="2025-05-10T00:47:21.231082754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 10 00:47:22.798508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1024634752.mount: Deactivated successfully. 
May 10 00:47:22.807295 env[1215]: time="2025-05-10T00:47:22.807215739Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:22.811735 env[1215]: time="2025-05-10T00:47:22.811654374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:22.815550 env[1215]: time="2025-05-10T00:47:22.815496982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:22.826071 env[1215]: time="2025-05-10T00:47:22.825953936Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:22.826840 env[1215]: time="2025-05-10T00:47:22.826782119Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 10 00:47:22.827619 env[1215]: time="2025-05-10T00:47:22.827574576Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 10 00:47:22.929349 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 10 00:47:22.929554 systemd[1]: Stopped kubelet.service. May 10 00:47:22.931094 systemd[1]: Starting kubelet.service... May 10 00:47:23.030868 systemd[1]: Started kubelet.service. 
May 10 00:47:23.255260 kubelet[1470]: E0510 00:47:23.255182 1470 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 10 00:47:23.257030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 10 00:47:23.257158 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 10 00:47:23.769833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1573689404.mount: Deactivated successfully. May 10 00:47:27.331969 env[1215]: time="2025-05-10T00:47:27.331908218Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.334677 env[1215]: time="2025-05-10T00:47:27.334647126Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.336534 env[1215]: time="2025-05-10T00:47:27.336508016Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.338231 env[1215]: time="2025-05-10T00:47:27.338205951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:27.339143 env[1215]: time="2025-05-10T00:47:27.339117050Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 10 
00:47:30.822417 systemd[1]: Stopped kubelet.service. May 10 00:47:30.824966 systemd[1]: Starting kubelet.service... May 10 00:47:30.862192 systemd[1]: Reloading. May 10 00:47:30.936236 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-05-10T00:47:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:30.936589 /usr/lib/systemd/system-generators/torcx-generator[1525]: time="2025-05-10T00:47:30Z" level=info msg="torcx already run" May 10 00:47:31.375133 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:47:31.375158 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:31.394441 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:31.481161 systemd[1]: Started kubelet.service. May 10 00:47:31.482687 systemd[1]: Stopping kubelet.service... May 10 00:47:31.483032 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:47:31.483211 systemd[1]: Stopped kubelet.service. May 10 00:47:31.484620 systemd[1]: Starting kubelet.service... May 10 00:47:31.562369 systemd[1]: Started kubelet.service. May 10 00:47:31.639432 kubelet[1573]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:47:31.639432 kubelet[1573]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:47:31.639432 kubelet[1573]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:47:31.639858 kubelet[1573]: I0510 00:47:31.639437 1573 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:47:31.792682 kubelet[1573]: I0510 00:47:31.792626 1573 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:47:31.792682 kubelet[1573]: I0510 00:47:31.792663 1573 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:47:31.792986 kubelet[1573]: I0510 00:47:31.792963 1573 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:47:31.909098 kubelet[1573]: I0510 00:47:31.908938 1573 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:47:31.918058 kubelet[1573]: E0510 00:47:31.917855 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 10 00:47:31.976482 kubelet[1573]: E0510 00:47:31.976411 1573 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:47:31.976482 kubelet[1573]: I0510 00:47:31.976454 1573 
server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 10 00:47:31.996362 kubelet[1573]: I0510 00:47:31.996341 1573 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:47:32.001942 kubelet[1573]: I0510 00:47:32.001901 1573 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:47:32.002153 kubelet[1573]: I0510 00:47:32.002103 1573 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:47:32.002353 kubelet[1573]: I0510 00:47:32.002143 1573 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"C
PUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:47:32.002472 kubelet[1573]: I0510 00:47:32.002367 1573 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:47:32.002472 kubelet[1573]: I0510 00:47:32.002380 1573 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:47:32.002547 kubelet[1573]: I0510 00:47:32.002517 1573 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:32.015700 kubelet[1573]: I0510 00:47:32.015656 1573 kubelet.go:408] "Attempting to sync node with API server" May 10 00:47:32.015700 kubelet[1573]: I0510 00:47:32.015697 1573 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:47:32.015837 kubelet[1573]: I0510 00:47:32.015758 1573 kubelet.go:314] "Adding apiserver pod source" May 10 00:47:32.015837 kubelet[1573]: I0510 00:47:32.015785 1573 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:47:32.016519 kubelet[1573]: W0510 00:47:32.016459 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 10 00:47:32.016586 kubelet[1573]: E0510 00:47:32.016515 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 
10 00:47:32.033404 kubelet[1573]: W0510 00:47:32.033217 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused May 10 00:47:32.033404 kubelet[1573]: E0510 00:47:32.033322 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" May 10 00:47:32.055230 kubelet[1573]: I0510 00:47:32.055191 1573 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:47:32.072643 kubelet[1573]: I0510 00:47:32.072586 1573 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:47:32.085057 kubelet[1573]: W0510 00:47:32.085000 1573 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 10 00:47:32.085835 kubelet[1573]: I0510 00:47:32.085807 1573 server.go:1269] "Started kubelet" May 10 00:47:32.086282 kubelet[1573]: I0510 00:47:32.086172 1573 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:47:32.086939 kubelet[1573]: I0510 00:47:32.086735 1573 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 00:47:32.087066 kubelet[1573]: I0510 00:47:32.087031 1573 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:47:32.089223 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 10 00:47:32.089365 kubelet[1573]: E0510 00:47:32.089269 1573 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 10 00:47:32.089365 kubelet[1573]: I0510 00:47:32.089338 1573 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 10 00:47:32.089912 kubelet[1573]: I0510 00:47:32.089500 1573 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 10 00:47:32.089912 kubelet[1573]: I0510 00:47:32.089767 1573 server.go:460] "Adding debug handlers to kubelet server"
May 10 00:47:32.090876 kubelet[1573]: E0510 00:47:32.090827 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.090876 kubelet[1573]: I0510 00:47:32.090903 1573 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 10 00:47:32.091125 kubelet[1573]: I0510 00:47:32.091103 1573 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
May 10 00:47:32.091220 kubelet[1573]: I0510 00:47:32.091169 1573 reconciler.go:26] "Reconciler: start to sync state"
May 10 00:47:32.091507 kubelet[1573]: W0510 00:47:32.091475 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:32.091586 kubelet[1573]: E0510 00:47:32.091497 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms"
May 10 00:47:32.091586 kubelet[1573]: E0510 00:47:32.091515 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:32.091739 kubelet[1573]: I0510 00:47:32.091677 1573 factory.go:221] Registration of the systemd container factory successfully
May 10 00:47:32.091812 kubelet[1573]: I0510 00:47:32.091744 1573 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 10 00:47:32.092713 kubelet[1573]: I0510 00:47:32.092693 1573 factory.go:221] Registration of the containerd container factory successfully
May 10 00:47:32.109571 kubelet[1573]: I0510 00:47:32.109516 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 10 00:47:32.110455 kubelet[1573]: I0510 00:47:32.110408 1573 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 10 00:47:32.110455 kubelet[1573]: I0510 00:47:32.110464 1573 status_manager.go:217] "Starting to sync pod status with apiserver"
May 10 00:47:32.110630 kubelet[1573]: I0510 00:47:32.110499 1573 kubelet.go:2321] "Starting kubelet main sync loop"
May 10 00:47:32.110630 kubelet[1573]: E0510 00:47:32.110543 1573 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 10 00:47:32.117008 kubelet[1573]: W0510 00:47:32.116964 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:32.117008 kubelet[1573]: E0510 00:47:32.117012 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:32.123973 kubelet[1573]: I0510 00:47:32.123946 1573 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 10 00:47:32.123973 kubelet[1573]: I0510 00:47:32.123963 1573 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 10 00:47:32.123973 kubelet[1573]: I0510 00:47:32.123980 1573 state_mem.go:36] "Initialized new in-memory state store"
May 10 00:47:32.192017 kubelet[1573]: E0510 00:47:32.191936 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.211424 kubelet[1573]: E0510 00:47:32.211336 1573 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 10 00:47:32.292168 kubelet[1573]: E0510 00:47:32.292118 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.292540 kubelet[1573]: E0510 00:47:32.292496 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms"
May 10 00:47:32.392688 kubelet[1573]: E0510 00:47:32.392602 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.411963 kubelet[1573]: E0510 00:47:32.411902 1573 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 10 00:47:32.493547 kubelet[1573]: E0510 00:47:32.493397 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.594092 kubelet[1573]: E0510 00:47:32.594021 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.694116 kubelet[1573]: E0510 00:47:32.694038 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms"
May 10 00:47:32.694487 kubelet[1573]: E0510 00:47:32.694128 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.794475 kubelet[1573]: E0510 00:47:32.794345 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.812619 kubelet[1573]: E0510 00:47:32.812545 1573 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
May 10 00:47:32.895143 kubelet[1573]: E0510 00:47:32.895038 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:32.995659 kubelet[1573]: E0510 00:47:32.995555 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:33.008282 kubelet[1573]: W0510 00:47:33.008218 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:33.008360 kubelet[1573]: E0510 00:47:33.008287 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:33.072625 kubelet[1573]: W0510 00:47:33.072458 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:33.072625 kubelet[1573]: E0510 00:47:33.072551 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:33.096676 kubelet[1573]: E0510 00:47:33.096624 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:33.197699 kubelet[1573]: E0510 00:47:33.197643 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:33.230452 kubelet[1573]: W0510 00:47:33.230389 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:33.230452 kubelet[1573]: E0510 00:47:33.230458 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:33.241585 kubelet[1573]: E0510 00:47:32.125451 1573 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183e03f406b4e8ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-10 00:47:32.085754028 +0000 UTC m=+0.519230593,LastTimestamp:2025-05-10 00:47:32.085754028 +0000 UTC m=+0.519230593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 10 00:47:33.247366 kubelet[1573]: I0510 00:47:33.247260 1573 policy_none.go:49] "None policy: Start"
May 10 00:47:33.248280 kubelet[1573]: I0510 00:47:33.248243 1573 memory_manager.go:170] "Starting memorymanager" policy="None"
May 10 00:47:33.248338 kubelet[1573]: I0510 00:47:33.248290 1573 state_mem.go:35] "Initializing new in-memory state store"
May 10 00:47:33.266861 systemd[1]: Created slice kubepods.slice.
May 10 00:47:33.270939 kubelet[1573]: W0510 00:47:33.270837 1573 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused
May 10 00:47:33.270939 kubelet[1573]: E0510 00:47:33.270936 1573 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:33.270919 systemd[1]: Created slice kubepods-burstable.slice.
May 10 00:47:33.273517 systemd[1]: Created slice kubepods-besteffort.slice.
May 10 00:47:33.283749 kubelet[1573]: I0510 00:47:33.283719 1573 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 10 00:47:33.283984 kubelet[1573]: I0510 00:47:33.283951 1573 eviction_manager.go:189] "Eviction manager: starting control loop"
May 10 00:47:33.284058 kubelet[1573]: I0510 00:47:33.283972 1573 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 10 00:47:33.284586 kubelet[1573]: I0510 00:47:33.284228 1573 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 10 00:47:33.285160 kubelet[1573]: E0510 00:47:33.285128 1573 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
May 10 00:47:33.385998 kubelet[1573]: I0510 00:47:33.385841 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 00:47:33.386265 kubelet[1573]: E0510 00:47:33.386232 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
May 10 00:47:33.495134 kubelet[1573]: E0510 00:47:33.495055 1573 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s"
May 10 00:47:33.587735 kubelet[1573]: I0510 00:47:33.587703 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 00:47:33.588044 kubelet[1573]: E0510 00:47:33.588016 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
May 10 00:47:33.621240 systemd[1]: Created slice kubepods-burstable-pod9c6e2c2ea750ae632b329496047815ed.slice.
May 10 00:47:33.629068 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice.
May 10 00:47:33.632829 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice.
May 10 00:47:33.700519 kubelet[1573]: I0510 00:47:33.700479 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost"
May 10 00:47:33.700911 kubelet[1573]: I0510 00:47:33.700536 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 00:47:33.700911 kubelet[1573]: I0510 00:47:33.700555 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost"
May 10 00:47:33.700911 kubelet[1573]: I0510 00:47:33.700575 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost"
May 10 00:47:33.700911 kubelet[1573]: I0510 00:47:33.700595 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 00:47:33.700911 kubelet[1573]: I0510 00:47:33.700615 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 00:47:33.701043 kubelet[1573]: I0510 00:47:33.700634 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 00:47:33.701043 kubelet[1573]: I0510 00:47:33.700652 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
May 10 00:47:33.701043 kubelet[1573]: I0510 00:47:33.700665 1573 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
May 10 00:47:33.928036 kubelet[1573]: E0510 00:47:33.927967 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:33.928769 env[1215]: time="2025-05-10T00:47:33.928705268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c6e2c2ea750ae632b329496047815ed,Namespace:kube-system,Attempt:0,}"
May 10 00:47:33.930785 kubelet[1573]: E0510 00:47:33.930756 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:33.931214 env[1215]: time="2025-05-10T00:47:33.931169090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
May 10 00:47:33.935532 kubelet[1573]: E0510 00:47:33.935485 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:33.936043 env[1215]: time="2025-05-10T00:47:33.936007069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
May 10 00:47:33.973203 kubelet[1573]: E0510 00:47:33.973070 1573 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError"
May 10 00:47:33.989759 kubelet[1573]: I0510 00:47:33.989721 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 00:47:33.990271 kubelet[1573]: E0510 00:47:33.990210 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
May 10 00:47:34.574029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4247311236.mount: Deactivated successfully.
May 10 00:47:34.581966 env[1215]: time="2025-05-10T00:47:34.581900427Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.585008 env[1215]: time="2025-05-10T00:47:34.584952599Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.586006 env[1215]: time="2025-05-10T00:47:34.585953649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.588109 env[1215]: time="2025-05-10T00:47:34.588071270Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.590012 env[1215]: time="2025-05-10T00:47:34.589973046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.591490 env[1215]: time="2025-05-10T00:47:34.591450188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.592950 env[1215]: time="2025-05-10T00:47:34.592920517Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.594824 env[1215]: time="2025-05-10T00:47:34.594784120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.596999 env[1215]: time="2025-05-10T00:47:34.596967396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.599068 env[1215]: time="2025-05-10T00:47:34.599044379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.600423 env[1215]: time="2025-05-10T00:47:34.600372926Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.601000 env[1215]: time="2025-05-10T00:47:34.600975080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 10 00:47:34.629642 env[1215]: time="2025-05-10T00:47:34.625324478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:47:34.629642 env[1215]: time="2025-05-10T00:47:34.625366349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:47:34.629642 env[1215]: time="2025-05-10T00:47:34.625384353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:47:34.629642 env[1215]: time="2025-05-10T00:47:34.625513671Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b229a364d1d9dfc42ca809d84caa71ec21a6896cb898b1166d926db7a559b614 pid=1614 runtime=io.containerd.runc.v2
May 10 00:47:34.635830 env[1215]: time="2025-05-10T00:47:34.635706574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:47:34.635991 env[1215]: time="2025-05-10T00:47:34.635845020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:47:34.635991 env[1215]: time="2025-05-10T00:47:34.635902430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:47:34.636225 env[1215]: time="2025-05-10T00:47:34.636157278Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9b5ad75ba80d0242779f66e2eb0716eead3da7a80bcc83e80a9434c61e3a67d pid=1637 runtime=io.containerd.runc.v2
May 10 00:47:34.640738 systemd[1]: Started cri-containerd-b229a364d1d9dfc42ca809d84caa71ec21a6896cb898b1166d926db7a559b614.scope.
May 10 00:47:34.647176 env[1215]: time="2025-05-10T00:47:34.647088949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:47:34.647417 env[1215]: time="2025-05-10T00:47:34.647388383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:47:34.647565 env[1215]: time="2025-05-10T00:47:34.647536927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:47:34.649413 systemd[1]: Started cri-containerd-d9b5ad75ba80d0242779f66e2eb0716eead3da7a80bcc83e80a9434c61e3a67d.scope.
May 10 00:47:34.652429 env[1215]: time="2025-05-10T00:47:34.652352310Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/164d274d1d0bb5e2c55651842b2a54e391ff9685a97af0fd94e1cbadc19d6fbd pid=1666 runtime=io.containerd.runc.v2
May 10 00:47:34.665336 systemd[1]: Started cri-containerd-164d274d1d0bb5e2c55651842b2a54e391ff9685a97af0fd94e1cbadc19d6fbd.scope.
May 10 00:47:34.688074 env[1215]: time="2025-05-10T00:47:34.687979101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b229a364d1d9dfc42ca809d84caa71ec21a6896cb898b1166d926db7a559b614\""
May 10 00:47:34.689215 kubelet[1573]: E0510 00:47:34.688957 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:34.690942 env[1215]: time="2025-05-10T00:47:34.690908357Z" level=info msg="CreateContainer within sandbox \"b229a364d1d9dfc42ca809d84caa71ec21a6896cb898b1166d926db7a559b614\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
May 10 00:47:34.696337 env[1215]: time="2025-05-10T00:47:34.696296919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9c6e2c2ea750ae632b329496047815ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9b5ad75ba80d0242779f66e2eb0716eead3da7a80bcc83e80a9434c61e3a67d\""
May 10 00:47:34.697009 kubelet[1573]: E0510 00:47:34.696977 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:34.699360 env[1215]: time="2025-05-10T00:47:34.699326158Z" level=info msg="CreateContainer within sandbox \"d9b5ad75ba80d0242779f66e2eb0716eead3da7a80bcc83e80a9434c61e3a67d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
May 10 00:47:34.707585 env[1215]: time="2025-05-10T00:47:34.707510740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"164d274d1d0bb5e2c55651842b2a54e391ff9685a97af0fd94e1cbadc19d6fbd\""
May 10 00:47:34.708238 kubelet[1573]: E0510 00:47:34.708214 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:34.709601 env[1215]: time="2025-05-10T00:47:34.709574689Z" level=info msg="CreateContainer within sandbox \"164d274d1d0bb5e2c55651842b2a54e391ff9685a97af0fd94e1cbadc19d6fbd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
May 10 00:47:34.718239 env[1215]: time="2025-05-10T00:47:34.718151053Z" level=info msg="CreateContainer within sandbox \"b229a364d1d9dfc42ca809d84caa71ec21a6896cb898b1166d926db7a559b614\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"67eb04301e65c9d3f7adaf3868c316e67cc87d0f4ac2e63fec23c68435eed8d0\""
May 10 00:47:34.719030 env[1215]: time="2025-05-10T00:47:34.718973720Z" level=info msg="StartContainer for \"67eb04301e65c9d3f7adaf3868c316e67cc87d0f4ac2e63fec23c68435eed8d0\""
May 10 00:47:34.733472 env[1215]: time="2025-05-10T00:47:34.733420740Z" level=info msg="CreateContainer within sandbox \"d9b5ad75ba80d0242779f66e2eb0716eead3da7a80bcc83e80a9434c61e3a67d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f34bb2987fa0e91f63ecf8792c4e385f23c27c76ea87f43ffa74e919d036a7d1\""
May 10 00:47:34.734300 env[1215]: time="2025-05-10T00:47:34.734284246Z" level=info msg="StartContainer for \"f34bb2987fa0e91f63ecf8792c4e385f23c27c76ea87f43ffa74e919d036a7d1\""
May 10 00:47:34.736611 systemd[1]: Started cri-containerd-67eb04301e65c9d3f7adaf3868c316e67cc87d0f4ac2e63fec23c68435eed8d0.scope.
May 10 00:47:34.742954 env[1215]: time="2025-05-10T00:47:34.742906248Z" level=info msg="CreateContainer within sandbox \"164d274d1d0bb5e2c55651842b2a54e391ff9685a97af0fd94e1cbadc19d6fbd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ea03cc78b9eb4fee5dab379960b4aec8d7116f5e26d86972af2729ac2deb9b1a\""
May 10 00:47:34.743492 env[1215]: time="2025-05-10T00:47:34.743460119Z" level=info msg="StartContainer for \"ea03cc78b9eb4fee5dab379960b4aec8d7116f5e26d86972af2729ac2deb9b1a\""
May 10 00:47:34.755645 systemd[1]: Started cri-containerd-f34bb2987fa0e91f63ecf8792c4e385f23c27c76ea87f43ffa74e919d036a7d1.scope.
May 10 00:47:34.765824 systemd[1]: Started cri-containerd-ea03cc78b9eb4fee5dab379960b4aec8d7116f5e26d86972af2729ac2deb9b1a.scope.
May 10 00:47:34.790039 env[1215]: time="2025-05-10T00:47:34.789875947Z" level=info msg="StartContainer for \"67eb04301e65c9d3f7adaf3868c316e67cc87d0f4ac2e63fec23c68435eed8d0\" returns successfully"
May 10 00:47:34.791749 kubelet[1573]: I0510 00:47:34.791443 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 00:47:34.791749 kubelet[1573]: E0510 00:47:34.791720 1573 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost"
May 10 00:47:34.813509 env[1215]: time="2025-05-10T00:47:34.813461039Z" level=info msg="StartContainer for \"f34bb2987fa0e91f63ecf8792c4e385f23c27c76ea87f43ffa74e919d036a7d1\" returns successfully"
May 10 00:47:34.819062 env[1215]: time="2025-05-10T00:47:34.819023394Z" level=info msg="StartContainer for \"ea03cc78b9eb4fee5dab379960b4aec8d7116f5e26d86972af2729ac2deb9b1a\" returns successfully"
May 10 00:47:35.129590 kubelet[1573]: E0510 00:47:35.129538 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:35.131109 kubelet[1573]: E0510 00:47:35.131086 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:35.132446 kubelet[1573]: E0510 00:47:35.132420 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:35.678214 kubelet[1573]: E0510 00:47:35.678175 1573 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
May 10 00:47:36.017394 kubelet[1573]: E0510 00:47:36.017343 1573 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 10 00:47:36.134450 kubelet[1573]: E0510 00:47:36.134407 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:36.380741 kubelet[1573]: E0510 00:47:36.380603 1573 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
May 10 00:47:36.392821 kubelet[1573]: I0510 00:47:36.392791 1573 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 10 00:47:36.400745 kubelet[1573]: I0510 00:47:36.400710 1573 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 10 00:47:36.400745 kubelet[1573]: E0510 00:47:36.400742 1573 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
May 10 00:47:36.410588 kubelet[1573]: E0510 00:47:36.410545 1573 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 10 00:47:37.018928 kubelet[1573]: I0510 00:47:37.018870 1573 apiserver.go:52] "Watching apiserver"
May 10 00:47:37.091869 kubelet[1573]: I0510 00:47:37.091806 1573 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
May 10 00:47:37.436663 kubelet[1573]: E0510 00:47:37.436624 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:47:38.055723 systemd[1]: Reloading.
May 10 00:47:38.126408 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-05-10T00:47:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 10 00:47:38.126450 /usr/lib/systemd/system-generators/torcx-generator[1876]: time="2025-05-10T00:47:38Z" level=info msg="torcx already run" May 10 00:47:38.137352 kubelet[1573]: E0510 00:47:38.137309 1573 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:38.357720 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 10 00:47:38.357740 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 10 00:47:38.377585 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 10 00:47:38.481774 systemd[1]: Stopping kubelet.service... May 10 00:47:38.507392 systemd[1]: kubelet.service: Deactivated successfully. May 10 00:47:38.507559 systemd[1]: Stopped kubelet.service. May 10 00:47:38.509216 systemd[1]: Starting kubelet.service... May 10 00:47:38.596169 systemd[1]: Started kubelet.service. May 10 00:47:38.656749 kubelet[1921]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 10 00:47:38.656749 kubelet[1921]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 10 00:47:38.656749 kubelet[1921]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 10 00:47:38.656749 kubelet[1921]: I0510 00:47:38.656208 1921 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 10 00:47:38.667276 kubelet[1921]: I0510 00:47:38.667211 1921 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 10 00:47:38.667276 kubelet[1921]: I0510 00:47:38.667245 1921 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 10 00:47:38.667547 kubelet[1921]: I0510 00:47:38.667525 1921 server.go:929] "Client rotation is on, will bootstrap in background" May 10 00:47:38.668848 kubelet[1921]: I0510 00:47:38.668817 1921 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 10 00:47:38.670552 kubelet[1921]: I0510 00:47:38.670518 1921 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 10 00:47:38.674452 kubelet[1921]: E0510 00:47:38.674405 1921 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 10 00:47:38.674452 kubelet[1921]: I0510 00:47:38.674438 1921 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 10 00:47:38.683384 kubelet[1921]: I0510 00:47:38.681382 1921 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 10 00:47:38.683384 kubelet[1921]: I0510 00:47:38.681630 1921 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 10 00:47:38.683384 kubelet[1921]: I0510 00:47:38.681758 1921 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 10 00:47:38.683384 kubelet[1921]: I0510 00:47:38.681786 1921 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerRe
servedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682032 1921 topology_manager.go:138] "Creating topology manager with none policy" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682041 1921 container_manager_linux.go:300] "Creating device plugin manager" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682089 1921 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682191 1921 kubelet.go:408] "Attempting to sync node with API server" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682218 1921 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682244 1921 kubelet.go:314] "Adding apiserver pod source" May 10 00:47:38.683679 kubelet[1921]: I0510 00:47:38.682256 1921 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 10 00:47:38.684403 kubelet[1921]: I0510 00:47:38.684323 1921 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 10 00:47:38.685025 kubelet[1921]: I0510 00:47:38.685009 1921 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 10 00:47:38.685633 kubelet[1921]: I0510 00:47:38.685617 1921 server.go:1269] "Started kubelet" May 10 00:47:38.687081 kubelet[1921]: I0510 00:47:38.687068 1921 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 10 00:47:38.687239 kubelet[1921]: I0510 00:47:38.687155 1921 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 10 00:47:38.687558 kubelet[1921]: I0510 00:47:38.687534 1921 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 10 
00:47:38.687618 kubelet[1921]: I0510 00:47:38.687592 1921 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 10 00:47:38.694642 kubelet[1921]: I0510 00:47:38.694616 1921 server.go:460] "Adding debug handlers to kubelet server" May 10 00:47:38.696120 kubelet[1921]: I0510 00:47:38.696073 1921 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 10 00:47:38.698948 kubelet[1921]: E0510 00:47:38.698924 1921 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 10 00:47:38.699169 kubelet[1921]: I0510 00:47:38.699122 1921 volume_manager.go:289] "Starting Kubelet Volume Manager" May 10 00:47:38.699300 kubelet[1921]: I0510 00:47:38.699226 1921 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 10 00:47:38.699840 kubelet[1921]: I0510 00:47:38.699821 1921 factory.go:221] Registration of the systemd container factory successfully May 10 00:47:38.699949 kubelet[1921]: I0510 00:47:38.699929 1921 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 10 00:47:38.700973 kubelet[1921]: I0510 00:47:38.700922 1921 reconciler.go:26] "Reconciler: start to sync state" May 10 00:47:38.701444 kubelet[1921]: I0510 00:47:38.701426 1921 factory.go:221] Registration of the containerd container factory successfully May 10 00:47:38.711978 kubelet[1921]: I0510 00:47:38.711944 1921 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 10 00:47:38.713557 kubelet[1921]: I0510 00:47:38.713545 1921 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 10 00:47:38.713666 kubelet[1921]: I0510 00:47:38.713654 1921 status_manager.go:217] "Starting to sync pod status with apiserver" May 10 00:47:38.713761 kubelet[1921]: I0510 00:47:38.713747 1921 kubelet.go:2321] "Starting kubelet main sync loop" May 10 00:47:38.713898 kubelet[1921]: E0510 00:47:38.713865 1921 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 10 00:47:38.731000 kubelet[1921]: I0510 00:47:38.730971 1921 cpu_manager.go:214] "Starting CPU manager" policy="none" May 10 00:47:38.731000 kubelet[1921]: I0510 00:47:38.730991 1921 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 10 00:47:38.731000 kubelet[1921]: I0510 00:47:38.731010 1921 state_mem.go:36] "Initialized new in-memory state store" May 10 00:47:38.731232 kubelet[1921]: I0510 00:47:38.731155 1921 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 10 00:47:38.731232 kubelet[1921]: I0510 00:47:38.731166 1921 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 10 00:47:38.731232 kubelet[1921]: I0510 00:47:38.731184 1921 policy_none.go:49] "None policy: Start" May 10 00:47:38.731850 kubelet[1921]: I0510 00:47:38.731836 1921 memory_manager.go:170] "Starting memorymanager" policy="None" May 10 00:47:38.731850 kubelet[1921]: I0510 00:47:38.731865 1921 state_mem.go:35] "Initializing new in-memory state store" May 10 00:47:38.732022 kubelet[1921]: I0510 00:47:38.732010 1921 state_mem.go:75] "Updated machine memory state" May 10 00:47:38.739145 kubelet[1921]: I0510 00:47:38.739112 1921 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 10 00:47:38.739311 kubelet[1921]: I0510 00:47:38.739292 1921 eviction_manager.go:189] "Eviction manager: starting control loop" May 10 00:47:38.739355 kubelet[1921]: I0510 00:47:38.739307 1921 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 10 00:47:38.739946 kubelet[1921]: I0510 00:47:38.739933 1921 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 10 00:47:38.848260 kubelet[1921]: I0510 00:47:38.848192 1921 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 10 00:47:38.902911 kubelet[1921]: I0510 00:47:38.902816 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 10 00:47:38.902911 kubelet[1921]: I0510 00:47:38.902861 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:47:38.903118 kubelet[1921]: I0510 00:47:38.902950 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:47:38.903118 kubelet[1921]: I0510 00:47:38.902999 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost" May 10 00:47:38.903118 kubelet[1921]: I0510 00:47:38.903046 1921 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost" May 10 00:47:38.903118 kubelet[1921]: I0510 00:47:38.903078 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c6e2c2ea750ae632b329496047815ed-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9c6e2c2ea750ae632b329496047815ed\") " pod="kube-system/kube-apiserver-localhost" May 10 00:47:38.903118 kubelet[1921]: I0510 00:47:38.903099 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:47:38.903280 kubelet[1921]: I0510 00:47:38.903121 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:47:38.903280 kubelet[1921]: I0510 00:47:38.903146 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 10 00:47:39.204384 kubelet[1921]: E0510 00:47:39.204307 1921 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 10 00:47:39.204564 kubelet[1921]: E0510 00:47:39.204526 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:39.204775 kubelet[1921]: E0510 00:47:39.204748 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:39.205156 kubelet[1921]: I0510 00:47:39.204870 1921 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 10 00:47:39.205156 kubelet[1921]: I0510 00:47:39.204945 1921 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 10 00:47:39.230046 kubelet[1921]: E0510 00:47:39.229993 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:39.683998 kubelet[1921]: I0510 00:47:39.683961 1921 apiserver.go:52] "Watching apiserver" May 10 00:47:39.700331 kubelet[1921]: I0510 00:47:39.700297 1921 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 10 00:47:39.723902 kubelet[1921]: E0510 00:47:39.723851 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:39.723998 kubelet[1921]: E0510 00:47:39.723958 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:40.561022 kubelet[1921]: E0510 00:47:40.560920 1921 kubelet.go:1915] "Failed creating a 
mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 10 00:47:40.561316 kubelet[1921]: E0510 00:47:40.561283 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:40.621724 kubelet[1921]: I0510 00:47:40.621663 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.621627249 podStartE2EDuration="2.621627249s" podCreationTimestamp="2025-05-10 00:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:40.428244817 +0000 UTC m=+1.828561993" watchObservedRunningTime="2025-05-10 00:47:40.621627249 +0000 UTC m=+2.021944425" May 10 00:47:40.725573 kubelet[1921]: E0510 00:47:40.725542 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:40.837381 kubelet[1921]: I0510 00:47:40.837197 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.837169893 podStartE2EDuration="3.837169893s" podCreationTimestamp="2025-05-10 00:47:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:40.62216356 +0000 UTC m=+2.022480756" watchObservedRunningTime="2025-05-10 00:47:40.837169893 +0000 UTC m=+2.237487079" May 10 00:47:40.851809 sudo[1957]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 10 00:47:40.852044 sudo[1957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 10 00:47:40.933416 kubelet[1921]: I0510 
00:47:40.933342 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.9333126099999998 podStartE2EDuration="2.93331261s" podCreationTimestamp="2025-05-10 00:47:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:40.838541564 +0000 UTC m=+2.238858740" watchObservedRunningTime="2025-05-10 00:47:40.93331261 +0000 UTC m=+2.333629776" May 10 00:47:41.336932 sudo[1957]: pam_unix(sudo:session): session closed for user root May 10 00:47:42.990505 kubelet[1921]: E0510 00:47:42.990443 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:43.268303 sudo[1313]: pam_unix(sudo:session): session closed for user root May 10 00:47:43.269908 sshd[1310]: pam_unix(sshd:session): session closed for user core May 10 00:47:43.273026 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:35680.service: Deactivated successfully. May 10 00:47:43.273812 systemd[1]: session-5.scope: Deactivated successfully. May 10 00:47:43.274044 systemd[1]: session-5.scope: Consumed 5.311s CPU time. May 10 00:47:43.274518 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit. May 10 00:47:43.275371 systemd-logind[1204]: Removed session 5. May 10 00:47:44.168009 kubelet[1921]: E0510 00:47:44.167971 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:44.646503 update_engine[1208]: I0510 00:47:44.646449 1208 update_attempter.cc:509] Updating boot flags... 
May 10 00:47:44.732559 kubelet[1921]: E0510 00:47:44.732517 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:45.674360 kubelet[1921]: I0510 00:47:45.674319 1921 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 10 00:47:45.674727 env[1215]: time="2025-05-10T00:47:45.674694509Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 10 00:47:45.674972 kubelet[1921]: I0510 00:47:45.674949 1921 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 10 00:47:46.940252 systemd[1]: Created slice kubepods-besteffort-podd170a372_209b_4e79_90ab_df5c2f0618e7.slice. May 10 00:47:46.958205 kubelet[1921]: I0510 00:47:46.958158 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d170a372-209b-4e79-90ab-df5c2f0618e7-cilium-config-path\") pod \"cilium-operator-5d85765b45-28g9n\" (UID: \"d170a372-209b-4e79-90ab-df5c2f0618e7\") " pod="kube-system/cilium-operator-5d85765b45-28g9n" May 10 00:47:46.958205 kubelet[1921]: I0510 00:47:46.958199 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7scp4\" (UniqueName: \"kubernetes.io/projected/d170a372-209b-4e79-90ab-df5c2f0618e7-kube-api-access-7scp4\") pod \"cilium-operator-5d85765b45-28g9n\" (UID: \"d170a372-209b-4e79-90ab-df5c2f0618e7\") " pod="kube-system/cilium-operator-5d85765b45-28g9n" May 10 00:47:47.008817 kubelet[1921]: W0510 00:47:47.008315 1921 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace 
"kube-system": no relationship found between node 'localhost' and this object May 10 00:47:47.008817 kubelet[1921]: E0510 00:47:47.008372 1921 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 10 00:47:47.008705 systemd[1]: Created slice kubepods-besteffort-pod888ddef3_6414_4333_a195_7c413d0c1fd4.slice. May 10 00:47:47.059041 kubelet[1921]: I0510 00:47:47.058981 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/888ddef3-6414-4333-a195-7c413d0c1fd4-lib-modules\") pod \"kube-proxy-k6d6l\" (UID: \"888ddef3-6414-4333-a195-7c413d0c1fd4\") " pod="kube-system/kube-proxy-k6d6l" May 10 00:47:47.059041 kubelet[1921]: I0510 00:47:47.059039 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/888ddef3-6414-4333-a195-7c413d0c1fd4-kube-proxy\") pod \"kube-proxy-k6d6l\" (UID: \"888ddef3-6414-4333-a195-7c413d0c1fd4\") " pod="kube-system/kube-proxy-k6d6l" May 10 00:47:47.059260 kubelet[1921]: I0510 00:47:47.059066 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8btc\" (UniqueName: \"kubernetes.io/projected/888ddef3-6414-4333-a195-7c413d0c1fd4-kube-api-access-v8btc\") pod \"kube-proxy-k6d6l\" (UID: \"888ddef3-6414-4333-a195-7c413d0c1fd4\") " pod="kube-system/kube-proxy-k6d6l" May 10 00:47:47.059322 kubelet[1921]: I0510 00:47:47.059292 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/888ddef3-6414-4333-a195-7c413d0c1fd4-xtables-lock\") pod \"kube-proxy-k6d6l\" (UID: \"888ddef3-6414-4333-a195-7c413d0c1fd4\") " pod="kube-system/kube-proxy-k6d6l" May 10 00:47:47.285681 systemd[1]: Created slice kubepods-burstable-poda53273fb_c04c_4bcc_b838_3363ef018074.slice. May 10 00:47:47.361795 kubelet[1921]: I0510 00:47:47.361745 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-config-path\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.361795 kubelet[1921]: I0510 00:47:47.361789 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-kernel\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.361795 kubelet[1921]: I0510 00:47:47.361807 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-hubble-tls\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362068 kubelet[1921]: I0510 00:47:47.361824 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-bpf-maps\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362068 kubelet[1921]: I0510 00:47:47.361838 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-cgroup\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362068 kubelet[1921]: I0510 00:47:47.361952 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-net\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362068 kubelet[1921]: I0510 00:47:47.361989 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-hostproc\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362068 kubelet[1921]: I0510 00:47:47.362018 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cni-path\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362073 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-run\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362092 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-lib-modules\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " 
pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362105 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-xtables-lock\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362118 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9jx\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-kube-api-access-pg9jx\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362142 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a53273fb-c04c-4bcc-b838-3363ef018074-clustermesh-secrets\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.362194 kubelet[1921]: I0510 00:47:47.362165 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-etc-cni-netd\") pod \"cilium-27qpz\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") " pod="kube-system/cilium-27qpz" May 10 00:47:47.463731 kubelet[1921]: I0510 00:47:47.463666 1921 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 10 00:47:47.552530 kubelet[1921]: E0510 00:47:47.552373 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:47.553174 env[1215]: time="2025-05-10T00:47:47.553094410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-28g9n,Uid:d170a372-209b-4e79-90ab-df5c2f0618e7,Namespace:kube-system,Attempt:0,}" May 10 00:47:47.588551 kubelet[1921]: E0510 00:47:47.588497 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:47.589201 env[1215]: time="2025-05-10T00:47:47.589149639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27qpz,Uid:a53273fb-c04c-4bcc-b838-3363ef018074,Namespace:kube-system,Attempt:0,}" May 10 00:47:47.592005 env[1215]: time="2025-05-10T00:47:47.591917010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:47.592072 env[1215]: time="2025-05-10T00:47:47.592014433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:47.592072 env[1215]: time="2025-05-10T00:47:47.592037227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:47.592370 env[1215]: time="2025-05-10T00:47:47.592281078Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da pid=2031 runtime=io.containerd.runc.v2 May 10 00:47:47.612315 systemd[1]: Started cri-containerd-a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da.scope. May 10 00:47:47.617723 env[1215]: time="2025-05-10T00:47:47.617622398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:47.618171 env[1215]: time="2025-05-10T00:47:47.617686911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:47.618171 env[1215]: time="2025-05-10T00:47:47.617697701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:47.618281 env[1215]: time="2025-05-10T00:47:47.618163162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda pid=2059 runtime=io.containerd.runc.v2 May 10 00:47:47.629484 systemd[1]: Started cri-containerd-faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda.scope. 
May 10 00:47:47.654981 env[1215]: time="2025-05-10T00:47:47.654912235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-28g9n,Uid:d170a372-209b-4e79-90ab-df5c2f0618e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\"" May 10 00:47:47.656927 kubelet[1921]: E0510 00:47:47.656147 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:47.658736 env[1215]: time="2025-05-10T00:47:47.658692263Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 10 00:47:47.661019 env[1215]: time="2025-05-10T00:47:47.660948435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-27qpz,Uid:a53273fb-c04c-4bcc-b838-3363ef018074,Namespace:kube-system,Attempt:0,} returns sandbox id \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\"" May 10 00:47:47.661710 kubelet[1921]: E0510 00:47:47.661682 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:48.212352 kubelet[1921]: E0510 00:47:48.212283 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:48.213127 env[1215]: time="2025-05-10T00:47:48.213058832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6d6l,Uid:888ddef3-6414-4333-a195-7c413d0c1fd4,Namespace:kube-system,Attempt:0,}" May 10 00:47:48.282236 env[1215]: time="2025-05-10T00:47:48.282139473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:47:48.282236 env[1215]: time="2025-05-10T00:47:48.282181232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:47:48.282236 env[1215]: time="2025-05-10T00:47:48.282190970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:47:48.282573 env[1215]: time="2025-05-10T00:47:48.282376161Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/167a70df48744132e606687d95224ab9ac7655b56521cbf3a7e084c2c93437be pid=2111 runtime=io.containerd.runc.v2 May 10 00:47:48.296432 systemd[1]: Started cri-containerd-167a70df48744132e606687d95224ab9ac7655b56521cbf3a7e084c2c93437be.scope. May 10 00:47:48.318762 env[1215]: time="2025-05-10T00:47:48.318708514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6d6l,Uid:888ddef3-6414-4333-a195-7c413d0c1fd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"167a70df48744132e606687d95224ab9ac7655b56521cbf3a7e084c2c93437be\"" May 10 00:47:48.320244 kubelet[1921]: E0510 00:47:48.319654 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:48.322033 env[1215]: time="2025-05-10T00:47:48.321986578Z" level=info msg="CreateContainer within sandbox \"167a70df48744132e606687d95224ab9ac7655b56521cbf3a7e084c2c93437be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 10 00:47:48.344047 env[1215]: time="2025-05-10T00:47:48.343980795Z" level=info msg="CreateContainer within sandbox \"167a70df48744132e606687d95224ab9ac7655b56521cbf3a7e084c2c93437be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"17ac1976aef98880957444bf8be5601102d5670777bd7446897dbdf8c668444a\"" May 10 00:47:48.344577 env[1215]: time="2025-05-10T00:47:48.344555341Z" level=info msg="StartContainer for \"17ac1976aef98880957444bf8be5601102d5670777bd7446897dbdf8c668444a\"" May 10 00:47:48.362290 systemd[1]: Started cri-containerd-17ac1976aef98880957444bf8be5601102d5670777bd7446897dbdf8c668444a.scope. May 10 00:47:48.395417 env[1215]: time="2025-05-10T00:47:48.395327763Z" level=info msg="StartContainer for \"17ac1976aef98880957444bf8be5601102d5670777bd7446897dbdf8c668444a\" returns successfully" May 10 00:47:48.740248 kubelet[1921]: E0510 00:47:48.740217 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:48.749506 kubelet[1921]: I0510 00:47:48.749436 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k6d6l" podStartSLOduration=2.749418365 podStartE2EDuration="2.749418365s" podCreationTimestamp="2025-05-10 00:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:47:48.74901964 +0000 UTC m=+10.149336816" watchObservedRunningTime="2025-05-10 00:47:48.749418365 +0000 UTC m=+10.149735541" May 10 00:47:48.985371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466841589.mount: Deactivated successfully. 
May 10 00:47:50.734145 kubelet[1921]: E0510 00:47:50.734082 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:50.925242 env[1215]: time="2025-05-10T00:47:50.925158084Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:50.996395 env[1215]: time="2025-05-10T00:47:50.996223777Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:50.999700 env[1215]: time="2025-05-10T00:47:50.999639575Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:47:51.000443 env[1215]: time="2025-05-10T00:47:51.000405813Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" May 10 00:47:51.001854 env[1215]: time="2025-05-10T00:47:51.001550076Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 10 00:47:51.002651 env[1215]: time="2025-05-10T00:47:51.002615982Z" level=info msg="CreateContainer within sandbox \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 10 00:47:51.020550 env[1215]: time="2025-05-10T00:47:51.020477874Z" level=info 
msg="CreateContainer within sandbox \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\"" May 10 00:47:51.021249 env[1215]: time="2025-05-10T00:47:51.021195049Z" level=info msg="StartContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\"" May 10 00:47:51.038141 systemd[1]: Started cri-containerd-b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2.scope. May 10 00:47:51.073264 env[1215]: time="2025-05-10T00:47:51.073173669Z" level=info msg="StartContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" returns successfully" May 10 00:47:51.750908 kubelet[1921]: E0510 00:47:51.750843 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:52.604145 kubelet[1921]: I0510 00:47:52.604072 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-28g9n" podStartSLOduration=3.260821412 podStartE2EDuration="6.604041302s" podCreationTimestamp="2025-05-10 00:47:46 +0000 UTC" firstStartedPulling="2025-05-10 00:47:47.658116523 +0000 UTC m=+9.058433709" lastFinishedPulling="2025-05-10 00:47:51.001336413 +0000 UTC m=+12.401653599" observedRunningTime="2025-05-10 00:47:52.603616519 +0000 UTC m=+14.003933705" watchObservedRunningTime="2025-05-10 00:47:52.604041302 +0000 UTC m=+14.004358478" May 10 00:47:52.752601 kubelet[1921]: E0510 00:47:52.752562 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:52.994913 kubelet[1921]: E0510 00:47:52.994841 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:47:59.258827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000407225.mount: Deactivated successfully. May 10 00:48:03.503260 env[1215]: time="2025-05-10T00:48:03.503194332Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:03.505128 env[1215]: time="2025-05-10T00:48:03.505060044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:03.506632 env[1215]: time="2025-05-10T00:48:03.506608327Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 10 00:48:03.507153 env[1215]: time="2025-05-10T00:48:03.507124559Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" May 10 00:48:03.509240 env[1215]: time="2025-05-10T00:48:03.509193683Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 10 00:48:03.521278 env[1215]: time="2025-05-10T00:48:03.521205270Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\"" May 10 
00:48:03.525307 env[1215]: time="2025-05-10T00:48:03.525260994Z" level=info msg="StartContainer for \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\"" May 10 00:48:03.544053 systemd[1]: Started cri-containerd-8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b.scope. May 10 00:48:03.582790 systemd[1]: cri-containerd-8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b.scope: Deactivated successfully. May 10 00:48:03.833013 env[1215]: time="2025-05-10T00:48:03.832260195Z" level=info msg="StartContainer for \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\" returns successfully" May 10 00:48:03.904962 env[1215]: time="2025-05-10T00:48:03.904907356Z" level=info msg="shim disconnected" id=8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b May 10 00:48:03.904962 env[1215]: time="2025-05-10T00:48:03.904949877Z" level=warning msg="cleaning up after shim disconnected" id=8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b namespace=k8s.io May 10 00:48:03.904962 env[1215]: time="2025-05-10T00:48:03.904958453Z" level=info msg="cleaning up dead shim" May 10 00:48:03.911997 env[1215]: time="2025-05-10T00:48:03.911945604Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2400 runtime=io.containerd.runc.v2\n" May 10 00:48:04.516923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b-rootfs.mount: Deactivated successfully. 
May 10 00:48:04.838424 kubelet[1921]: E0510 00:48:04.838154 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:04.840546 env[1215]: time="2025-05-10T00:48:04.840509315Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 10 00:48:05.338955 env[1215]: time="2025-05-10T00:48:05.338854537Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\"" May 10 00:48:05.339359 env[1215]: time="2025-05-10T00:48:05.339325413Z" level=info msg="StartContainer for \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\"" May 10 00:48:05.356406 systemd[1]: Started cri-containerd-7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2.scope. May 10 00:48:05.458908 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 10 00:48:05.459213 systemd[1]: Stopped systemd-sysctl.service. May 10 00:48:05.459415 systemd[1]: Stopping systemd-sysctl.service... May 10 00:48:05.460717 systemd[1]: Starting systemd-sysctl.service... May 10 00:48:05.462206 systemd[1]: cri-containerd-7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2.scope: Deactivated successfully. May 10 00:48:05.472725 systemd[1]: Finished systemd-sysctl.service. May 10 00:48:05.483440 env[1215]: time="2025-05-10T00:48:05.482676396Z" level=info msg="StartContainer for \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\" returns successfully" May 10 00:48:05.501726 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:54210.service. 
May 10 00:48:05.515617 env[1215]: time="2025-05-10T00:48:05.515562317Z" level=info msg="shim disconnected" id=7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2 May 10 00:48:05.515617 env[1215]: time="2025-05-10T00:48:05.515609064Z" level=warning msg="cleaning up after shim disconnected" id=7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2 namespace=k8s.io May 10 00:48:05.515617 env[1215]: time="2025-05-10T00:48:05.515617119Z" level=info msg="cleaning up dead shim" May 10 00:48:05.518557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2-rootfs.mount: Deactivated successfully. May 10 00:48:05.524492 env[1215]: time="2025-05-10T00:48:05.524457000Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2465 runtime=io.containerd.runc.v2\n" May 10 00:48:05.541233 sshd[2462]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:05.542514 sshd[2462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:05.546494 systemd-logind[1204]: New session 6 of user core. May 10 00:48:05.547435 systemd[1]: Started session-6.scope. May 10 00:48:05.681586 sshd[2462]: pam_unix(sshd:session): session closed for user core May 10 00:48:05.684066 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:54210.service: Deactivated successfully. May 10 00:48:05.685084 systemd[1]: session-6.scope: Deactivated successfully. May 10 00:48:05.685746 systemd-logind[1204]: Session 6 logged out. Waiting for processes to exit. May 10 00:48:05.686569 systemd-logind[1204]: Removed session 6. 
May 10 00:48:05.841849 kubelet[1921]: E0510 00:48:05.841812 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:05.845169 env[1215]: time="2025-05-10T00:48:05.845119720Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 10 00:48:05.862395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3686163449.mount: Deactivated successfully. May 10 00:48:05.868527 env[1215]: time="2025-05-10T00:48:05.868477283Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\"" May 10 00:48:05.868979 env[1215]: time="2025-05-10T00:48:05.868956825Z" level=info msg="StartContainer for \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\"" May 10 00:48:05.884182 systemd[1]: Started cri-containerd-6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8.scope. May 10 00:48:05.908781 env[1215]: time="2025-05-10T00:48:05.908726756Z" level=info msg="StartContainer for \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\" returns successfully" May 10 00:48:05.909604 systemd[1]: cri-containerd-6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8.scope: Deactivated successfully. 
May 10 00:48:05.935438 env[1215]: time="2025-05-10T00:48:05.935303697Z" level=info msg="shim disconnected" id=6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8 May 10 00:48:05.935438 env[1215]: time="2025-05-10T00:48:05.935357748Z" level=warning msg="cleaning up after shim disconnected" id=6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8 namespace=k8s.io May 10 00:48:05.935438 env[1215]: time="2025-05-10T00:48:05.935372817Z" level=info msg="cleaning up dead shim" May 10 00:48:05.942933 env[1215]: time="2025-05-10T00:48:05.942862398Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2533 runtime=io.containerd.runc.v2\n" May 10 00:48:06.518763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8-rootfs.mount: Deactivated successfully. May 10 00:48:06.844926 kubelet[1921]: E0510 00:48:06.844647 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:06.847119 env[1215]: time="2025-05-10T00:48:06.846949403Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 10 00:48:06.864965 env[1215]: time="2025-05-10T00:48:06.864900909Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\"" May 10 00:48:06.865328 env[1215]: time="2025-05-10T00:48:06.865307675Z" level=info msg="StartContainer for \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\"" May 10 00:48:06.885455 systemd[1]: Started 
cri-containerd-876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82.scope. May 10 00:48:06.911621 systemd[1]: cri-containerd-876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82.scope: Deactivated successfully. May 10 00:48:06.913950 env[1215]: time="2025-05-10T00:48:06.913909485Z" level=info msg="StartContainer for \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\" returns successfully" May 10 00:48:06.933298 env[1215]: time="2025-05-10T00:48:06.933244614Z" level=info msg="shim disconnected" id=876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82 May 10 00:48:06.933298 env[1215]: time="2025-05-10T00:48:06.933294918Z" level=warning msg="cleaning up after shim disconnected" id=876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82 namespace=k8s.io May 10 00:48:06.933298 env[1215]: time="2025-05-10T00:48:06.933304417Z" level=info msg="cleaning up dead shim" May 10 00:48:06.939838 env[1215]: time="2025-05-10T00:48:06.939759439Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:48:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2587 runtime=io.containerd.runc.v2\n" May 10 00:48:07.518865 systemd[1]: run-containerd-runc-k8s.io-876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82-runc.MrwTNk.mount: Deactivated successfully. May 10 00:48:07.518985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82-rootfs.mount: Deactivated successfully. 
May 10 00:48:07.848720 kubelet[1921]: E0510 00:48:07.848564 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:07.850916 env[1215]: time="2025-05-10T00:48:07.850334197Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 10 00:48:07.869377 env[1215]: time="2025-05-10T00:48:07.869313230Z" level=info msg="CreateContainer within sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\"" May 10 00:48:07.870001 env[1215]: time="2025-05-10T00:48:07.869902579Z" level=info msg="StartContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\"" May 10 00:48:07.885499 systemd[1]: Started cri-containerd-1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9.scope. May 10 00:48:07.913846 env[1215]: time="2025-05-10T00:48:07.913796819Z" level=info msg="StartContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" returns successfully" May 10 00:48:08.044933 kubelet[1921]: I0510 00:48:08.044877 1921 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 10 00:48:08.079124 systemd[1]: Created slice kubepods-burstable-pod8bde6a33_4b73_4ccd_93a3_d7100a255aaa.slice. May 10 00:48:08.084756 systemd[1]: Created slice kubepods-burstable-pod42e3daa2_dd4f_4550_b4a7_6806fa47d0ed.slice. 
May 10 00:48:08.212172 kubelet[1921]: I0510 00:48:08.212108 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5tx7\" (UniqueName: \"kubernetes.io/projected/42e3daa2-dd4f-4550-b4a7-6806fa47d0ed-kube-api-access-n5tx7\") pod \"coredns-6f6b679f8f-nj9h2\" (UID: \"42e3daa2-dd4f-4550-b4a7-6806fa47d0ed\") " pod="kube-system/coredns-6f6b679f8f-nj9h2" May 10 00:48:08.212172 kubelet[1921]: I0510 00:48:08.212153 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8bde6a33-4b73-4ccd-93a3-d7100a255aaa-config-volume\") pod \"coredns-6f6b679f8f-xfv82\" (UID: \"8bde6a33-4b73-4ccd-93a3-d7100a255aaa\") " pod="kube-system/coredns-6f6b679f8f-xfv82" May 10 00:48:08.212172 kubelet[1921]: I0510 00:48:08.212172 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42e3daa2-dd4f-4550-b4a7-6806fa47d0ed-config-volume\") pod \"coredns-6f6b679f8f-nj9h2\" (UID: \"42e3daa2-dd4f-4550-b4a7-6806fa47d0ed\") " pod="kube-system/coredns-6f6b679f8f-nj9h2" May 10 00:48:08.212363 kubelet[1921]: I0510 00:48:08.212191 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqsv\" (UniqueName: \"kubernetes.io/projected/8bde6a33-4b73-4ccd-93a3-d7100a255aaa-kube-api-access-crqsv\") pod \"coredns-6f6b679f8f-xfv82\" (UID: \"8bde6a33-4b73-4ccd-93a3-d7100a255aaa\") " pod="kube-system/coredns-6f6b679f8f-xfv82" May 10 00:48:08.388941 kubelet[1921]: E0510 00:48:08.388910 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:08.389201 kubelet[1921]: E0510 00:48:08.388993 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:08.389828 env[1215]: time="2025-05-10T00:48:08.389795774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nj9h2,Uid:42e3daa2-dd4f-4550-b4a7-6806fa47d0ed,Namespace:kube-system,Attempt:0,}" May 10 00:48:08.389934 env[1215]: time="2025-05-10T00:48:08.389796486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfv82,Uid:8bde6a33-4b73-4ccd-93a3-d7100a255aaa,Namespace:kube-system,Attempt:0,}" May 10 00:48:08.852784 kubelet[1921]: E0510 00:48:08.852753 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:09.848741 systemd-networkd[1037]: cilium_host: Link UP May 10 00:48:09.848944 systemd-networkd[1037]: cilium_net: Link UP May 10 00:48:09.848948 systemd-networkd[1037]: cilium_net: Gained carrier May 10 00:48:09.849181 systemd-networkd[1037]: cilium_host: Gained carrier May 10 00:48:09.852022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 10 00:48:09.851591 systemd-networkd[1037]: cilium_host: Gained IPv6LL May 10 00:48:09.860850 kubelet[1921]: E0510 00:48:09.860783 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:09.936473 systemd-networkd[1037]: cilium_vxlan: Link UP May 10 00:48:09.936482 systemd-networkd[1037]: cilium_vxlan: Gained carrier May 10 00:48:10.136920 kernel: NET: Registered PF_ALG protocol family May 10 00:48:10.686157 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:35750.service. 
May 10 00:48:10.725018 systemd-networkd[1037]: cilium_net: Gained IPv6LL May 10 00:48:10.730235 sshd[3107]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:10.731747 sshd[3107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:10.736217 systemd[1]: Started session-7.scope. May 10 00:48:10.736655 systemd-logind[1204]: New session 7 of user core. May 10 00:48:10.751200 systemd-networkd[1037]: lxc_health: Link UP May 10 00:48:10.756931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 10 00:48:10.757040 systemd-networkd[1037]: lxc_health: Gained carrier May 10 00:48:10.862589 kubelet[1921]: E0510 00:48:10.862358 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:10.876723 sshd[3107]: pam_unix(sshd:session): session closed for user core May 10 00:48:10.879359 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:35750.service: Deactivated successfully. May 10 00:48:10.880053 systemd[1]: session-7.scope: Deactivated successfully. May 10 00:48:10.880681 systemd-logind[1204]: Session 7 logged out. Waiting for processes to exit. May 10 00:48:10.881502 systemd-logind[1204]: Removed session 7. 
May 10 00:48:10.948188 systemd-networkd[1037]: lxc7e61a088c536: Link UP May 10 00:48:10.963946 kernel: eth0: renamed from tmp6da70 May 10 00:48:10.969105 kernel: eth0: renamed from tmpad0b0 May 10 00:48:10.974383 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 10 00:48:10.974428 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e61a088c536: link becomes ready May 10 00:48:10.972088 systemd-networkd[1037]: lxc9816b3c71c07: Link UP May 10 00:48:10.975181 systemd-networkd[1037]: lxc7e61a088c536: Gained carrier May 10 00:48:10.990189 systemd-networkd[1037]: lxc9816b3c71c07: Gained carrier May 10 00:48:10.990913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9816b3c71c07: link becomes ready May 10 00:48:11.295034 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL May 10 00:48:11.808945 kubelet[1921]: I0510 00:48:11.808846 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-27qpz" podStartSLOduration=9.963865942 podStartE2EDuration="25.808824209s" podCreationTimestamp="2025-05-10 00:47:46 +0000 UTC" firstStartedPulling="2025-05-10 00:47:47.663001854 +0000 UTC m=+9.063319030" lastFinishedPulling="2025-05-10 00:48:03.507960121 +0000 UTC m=+24.908277297" observedRunningTime="2025-05-10 00:48:08.926495452 +0000 UTC m=+30.326812638" watchObservedRunningTime="2025-05-10 00:48:11.808824209 +0000 UTC m=+33.209141385" May 10 00:48:11.864398 kubelet[1921]: E0510 00:48:11.864343 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:12.319100 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 10 00:48:12.575076 systemd-networkd[1037]: lxc7e61a088c536: Gained IPv6LL May 10 00:48:12.575381 systemd-networkd[1037]: lxc9816b3c71c07: Gained IPv6LL May 10 00:48:12.866716 kubelet[1921]: E0510 00:48:12.866601 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:13.868384 kubelet[1921]: E0510 00:48:13.868336 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:14.386968 env[1215]: time="2025-05-10T00:48:14.386862635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:48:14.386968 env[1215]: time="2025-05-10T00:48:14.386933407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:48:14.386968 env[1215]: time="2025-05-10T00:48:14.386945129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:48:14.388248 env[1215]: time="2025-05-10T00:48:14.388203604Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad0b0c64d9fca72dba5e101f861fd9ed6af234affcc2b94bf3f22b4cf0ede4ae pid=3182 runtime=io.containerd.runc.v2 May 10 00:48:14.396107 env[1215]: time="2025-05-10T00:48:14.396017756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 10 00:48:14.396107 env[1215]: time="2025-05-10T00:48:14.396105791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 10 00:48:14.396268 env[1215]: time="2025-05-10T00:48:14.396142781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 10 00:48:14.396383 env[1215]: time="2025-05-10T00:48:14.396331776Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6da701fdbd9801e5ae7185b3ea9647a5b45986b0e988d5a9cf94cf5ed7704078 pid=3199 runtime=io.containerd.runc.v2 May 10 00:48:14.408917 systemd[1]: Started cri-containerd-ad0b0c64d9fca72dba5e101f861fd9ed6af234affcc2b94bf3f22b4cf0ede4ae.scope. May 10 00:48:14.412751 systemd[1]: Started cri-containerd-6da701fdbd9801e5ae7185b3ea9647a5b45986b0e988d5a9cf94cf5ed7704078.scope. May 10 00:48:14.421227 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:48:14.423959 systemd-resolved[1162]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 10 00:48:14.447663 env[1215]: time="2025-05-10T00:48:14.446618528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-nj9h2,Uid:42e3daa2-dd4f-4550-b4a7-6806fa47d0ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"6da701fdbd9801e5ae7185b3ea9647a5b45986b0e988d5a9cf94cf5ed7704078\"" May 10 00:48:14.447663 env[1215]: time="2025-05-10T00:48:14.447181175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xfv82,Uid:8bde6a33-4b73-4ccd-93a3-d7100a255aaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad0b0c64d9fca72dba5e101f861fd9ed6af234affcc2b94bf3f22b4cf0ede4ae\"" May 10 00:48:14.447843 kubelet[1921]: E0510 00:48:14.447681 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:14.449250 kubelet[1921]: E0510 00:48:14.449225 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 10 00:48:14.450101 env[1215]: time="2025-05-10T00:48:14.450069492Z" level=info msg="CreateContainer within sandbox \"6da701fdbd9801e5ae7185b3ea9647a5b45986b0e988d5a9cf94cf5ed7704078\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:48:14.451333 env[1215]: time="2025-05-10T00:48:14.451310062Z" level=info msg="CreateContainer within sandbox \"ad0b0c64d9fca72dba5e101f861fd9ed6af234affcc2b94bf3f22b4cf0ede4ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 10 00:48:15.151617 env[1215]: time="2025-05-10T00:48:15.151541595Z" level=info msg="CreateContainer within sandbox \"6da701fdbd9801e5ae7185b3ea9647a5b45986b0e988d5a9cf94cf5ed7704078\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"164e7743e9eb733adf2894a7c93f6398cd0b45134422ab49a5cbcf1fab72771c\"" May 10 00:48:15.152239 env[1215]: time="2025-05-10T00:48:15.152211925Z" level=info msg="StartContainer for \"164e7743e9eb733adf2894a7c93f6398cd0b45134422ab49a5cbcf1fab72771c\"" May 10 00:48:15.166638 systemd[1]: Started cri-containerd-164e7743e9eb733adf2894a7c93f6398cd0b45134422ab49a5cbcf1fab72771c.scope. May 10 00:48:15.168548 env[1215]: time="2025-05-10T00:48:15.167297897Z" level=info msg="CreateContainer within sandbox \"ad0b0c64d9fca72dba5e101f861fd9ed6af234affcc2b94bf3f22b4cf0ede4ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d3b193a6063957906fdc34208d09573410b0f37fa5b9974aeae1d25840ea106\"" May 10 00:48:15.168548 env[1215]: time="2025-05-10T00:48:15.168218896Z" level=info msg="StartContainer for \"0d3b193a6063957906fdc34208d09573410b0f37fa5b9974aeae1d25840ea106\"" May 10 00:48:15.190828 systemd[1]: Started cri-containerd-0d3b193a6063957906fdc34208d09573410b0f37fa5b9974aeae1d25840ea106.scope. 
May 10 00:48:15.271557 env[1215]: time="2025-05-10T00:48:15.271500328Z" level=info msg="StartContainer for \"164e7743e9eb733adf2894a7c93f6398cd0b45134422ab49a5cbcf1fab72771c\" returns successfully" May 10 00:48:15.283112 env[1215]: time="2025-05-10T00:48:15.283060746Z" level=info msg="StartContainer for \"0d3b193a6063957906fdc34208d09573410b0f37fa5b9974aeae1d25840ea106\" returns successfully" May 10 00:48:15.393619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount787889935.mount: Deactivated successfully. May 10 00:48:15.874402 kubelet[1921]: E0510 00:48:15.874177 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:15.875681 kubelet[1921]: E0510 00:48:15.875645 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:15.880201 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:35756.service. May 10 00:48:15.906815 kubelet[1921]: I0510 00:48:15.906750 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xfv82" podStartSLOduration=29.90672733 podStartE2EDuration="29.90672733s" podCreationTimestamp="2025-05-10 00:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:48:15.906251867 +0000 UTC m=+37.306569053" watchObservedRunningTime="2025-05-10 00:48:15.90672733 +0000 UTC m=+37.307044536" May 10 00:48:15.916914 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 35756 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:15.918181 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:15.923572 systemd[1]: Started session-8.scope. 
May 10 00:48:15.924043 systemd-logind[1204]: New session 8 of user core. May 10 00:48:16.039594 sshd[3331]: pam_unix(sshd:session): session closed for user core May 10 00:48:16.041671 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:35756.service: Deactivated successfully. May 10 00:48:16.042496 systemd[1]: session-8.scope: Deactivated successfully. May 10 00:48:16.043198 systemd-logind[1204]: Session 8 logged out. Waiting for processes to exit. May 10 00:48:16.043802 systemd-logind[1204]: Removed session 8. May 10 00:48:16.877195 kubelet[1921]: E0510 00:48:16.877154 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:16.877195 kubelet[1921]: E0510 00:48:16.877169 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:17.879455 kubelet[1921]: E0510 00:48:17.879421 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:17.879913 kubelet[1921]: E0510 00:48:17.879482 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:48:21.044904 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:54162.service. May 10 00:48:21.080248 sshd[3356]: Accepted publickey for core from 10.0.0.1 port 54162 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:21.081342 sshd[3356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:21.084578 systemd-logind[1204]: New session 9 of user core. May 10 00:48:21.085597 systemd[1]: Started session-9.scope. 
May 10 00:48:21.189512 sshd[3356]: pam_unix(sshd:session): session closed for user core May 10 00:48:21.191657 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:54162.service: Deactivated successfully. May 10 00:48:21.192534 systemd[1]: session-9.scope: Deactivated successfully. May 10 00:48:21.193557 systemd-logind[1204]: Session 9 logged out. Waiting for processes to exit. May 10 00:48:21.194331 systemd-logind[1204]: Removed session 9. May 10 00:48:26.194505 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:54164.service. May 10 00:48:26.231833 sshd[3370]: Accepted publickey for core from 10.0.0.1 port 54164 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:26.233107 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:26.236799 systemd-logind[1204]: New session 10 of user core. May 10 00:48:26.237849 systemd[1]: Started session-10.scope. May 10 00:48:26.337087 sshd[3370]: pam_unix(sshd:session): session closed for user core May 10 00:48:26.339951 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:54164.service: Deactivated successfully. May 10 00:48:26.340463 systemd[1]: session-10.scope: Deactivated successfully. May 10 00:48:26.341129 systemd-logind[1204]: Session 10 logged out. Waiting for processes to exit. May 10 00:48:26.342716 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:54170.service. May 10 00:48:26.343432 systemd-logind[1204]: Removed session 10. May 10 00:48:26.376508 sshd[3386]: Accepted publickey for core from 10.0.0.1 port 54170 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:26.377639 sshd[3386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:26.380833 systemd-logind[1204]: New session 11 of user core. May 10 00:48:26.381567 systemd[1]: Started session-11.scope. 
May 10 00:48:26.538762 sshd[3386]: pam_unix(sshd:session): session closed for user core May 10 00:48:26.544450 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:59868.service. May 10 00:48:26.545161 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:54170.service: Deactivated successfully. May 10 00:48:26.545873 systemd[1]: session-11.scope: Deactivated successfully. May 10 00:48:26.548998 systemd-logind[1204]: Session 11 logged out. Waiting for processes to exit. May 10 00:48:26.551530 systemd-logind[1204]: Removed session 11. May 10 00:48:26.584863 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 59868 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:26.586073 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:26.589617 systemd-logind[1204]: New session 12 of user core. May 10 00:48:26.590440 systemd[1]: Started session-12.scope. May 10 00:48:26.700072 sshd[3396]: pam_unix(sshd:session): session closed for user core May 10 00:48:26.702549 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:59868.service: Deactivated successfully. May 10 00:48:26.703426 systemd[1]: session-12.scope: Deactivated successfully. May 10 00:48:26.704093 systemd-logind[1204]: Session 12 logged out. Waiting for processes to exit. May 10 00:48:26.705142 systemd-logind[1204]: Removed session 12. May 10 00:48:31.705400 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:59884.service. May 10 00:48:31.740316 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 59884 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:31.741680 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:31.745327 systemd-logind[1204]: New session 13 of user core. May 10 00:48:31.746165 systemd[1]: Started session-13.scope. 
May 10 00:48:31.855664 sshd[3410]: pam_unix(sshd:session): session closed for user core May 10 00:48:31.858060 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:59884.service: Deactivated successfully. May 10 00:48:31.858796 systemd[1]: session-13.scope: Deactivated successfully. May 10 00:48:31.859307 systemd-logind[1204]: Session 13 logged out. Waiting for processes to exit. May 10 00:48:31.860035 systemd-logind[1204]: Removed session 13. May 10 00:48:36.860437 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:35938.service. May 10 00:48:36.894060 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 35938 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:36.895058 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:36.898304 systemd-logind[1204]: New session 14 of user core. May 10 00:48:36.899032 systemd[1]: Started session-14.scope. May 10 00:48:37.008717 sshd[3423]: pam_unix(sshd:session): session closed for user core May 10 00:48:37.011574 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:35938.service: Deactivated successfully. May 10 00:48:37.012096 systemd[1]: session-14.scope: Deactivated successfully. May 10 00:48:37.012684 systemd-logind[1204]: Session 14 logged out. Waiting for processes to exit. May 10 00:48:37.013711 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:35942.service. May 10 00:48:37.014503 systemd-logind[1204]: Removed session 14. May 10 00:48:37.048261 sshd[3436]: Accepted publickey for core from 10.0.0.1 port 35942 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:37.049359 sshd[3436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:37.052387 systemd-logind[1204]: New session 15 of user core. May 10 00:48:37.053244 systemd[1]: Started session-15.scope. 
May 10 00:48:37.255220 sshd[3436]: pam_unix(sshd:session): session closed for user core May 10 00:48:37.261201 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:35954.service. May 10 00:48:37.261940 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:35942.service: Deactivated successfully. May 10 00:48:37.262778 systemd[1]: session-15.scope: Deactivated successfully. May 10 00:48:37.263754 systemd-logind[1204]: Session 15 logged out. Waiting for processes to exit. May 10 00:48:37.264595 systemd-logind[1204]: Removed session 15. May 10 00:48:37.296321 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:37.297290 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:37.300208 systemd-logind[1204]: New session 16 of user core. May 10 00:48:37.300928 systemd[1]: Started session-16.scope. May 10 00:48:38.392071 sshd[3446]: pam_unix(sshd:session): session closed for user core May 10 00:48:38.395230 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:35956.service. May 10 00:48:38.395652 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:35954.service: Deactivated successfully. May 10 00:48:38.396198 systemd[1]: session-16.scope: Deactivated successfully. May 10 00:48:38.396908 systemd-logind[1204]: Session 16 logged out. Waiting for processes to exit. May 10 00:48:38.398315 systemd-logind[1204]: Removed session 16. May 10 00:48:38.434415 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 35956 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:38.435807 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:38.439962 systemd-logind[1204]: New session 17 of user core. May 10 00:48:38.440932 systemd[1]: Started session-17.scope. 
May 10 00:48:38.687624 sshd[3463]: pam_unix(sshd:session): session closed for user core May 10 00:48:38.691479 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:35958.service. May 10 00:48:38.700815 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:35956.service: Deactivated successfully. May 10 00:48:38.701637 systemd[1]: session-17.scope: Deactivated successfully. May 10 00:48:38.706308 systemd-logind[1204]: Session 17 logged out. Waiting for processes to exit. May 10 00:48:38.709577 systemd-logind[1204]: Removed session 17. May 10 00:48:38.739332 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 35958 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:38.740431 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:38.743664 systemd-logind[1204]: New session 18 of user core. May 10 00:48:38.744453 systemd[1]: Started session-18.scope. May 10 00:48:38.852334 sshd[3475]: pam_unix(sshd:session): session closed for user core May 10 00:48:38.854659 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:35958.service: Deactivated successfully. May 10 00:48:38.855597 systemd[1]: session-18.scope: Deactivated successfully. May 10 00:48:38.856357 systemd-logind[1204]: Session 18 logged out. Waiting for processes to exit. May 10 00:48:38.857198 systemd-logind[1204]: Removed session 18. May 10 00:48:43.856349 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:35968.service. May 10 00:48:43.891375 sshd[3491]: Accepted publickey for core from 10.0.0.1 port 35968 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:43.892661 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:43.896193 systemd-logind[1204]: New session 19 of user core. May 10 00:48:43.896879 systemd[1]: Started session-19.scope. 
May 10 00:48:44.001426 sshd[3491]: pam_unix(sshd:session): session closed for user core May 10 00:48:44.003491 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:35968.service: Deactivated successfully. May 10 00:48:44.004210 systemd[1]: session-19.scope: Deactivated successfully. May 10 00:48:44.004750 systemd-logind[1204]: Session 19 logged out. Waiting for processes to exit. May 10 00:48:44.005574 systemd-logind[1204]: Removed session 19. May 10 00:48:49.005543 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:51860.service. May 10 00:48:49.037334 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 51860 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:49.038399 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:49.042073 systemd-logind[1204]: New session 20 of user core. May 10 00:48:49.042977 systemd[1]: Started session-20.scope. May 10 00:48:49.156704 sshd[3510]: pam_unix(sshd:session): session closed for user core May 10 00:48:49.159475 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:51860.service: Deactivated successfully. May 10 00:48:49.160181 systemd[1]: session-20.scope: Deactivated successfully. May 10 00:48:49.160920 systemd-logind[1204]: Session 20 logged out. Waiting for processes to exit. May 10 00:48:49.161588 systemd-logind[1204]: Removed session 20. May 10 00:48:54.161779 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:51876.service. May 10 00:48:54.197115 sshd[3524]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:54.198210 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:54.201632 systemd-logind[1204]: New session 21 of user core. May 10 00:48:54.202413 systemd[1]: Started session-21.scope. 
May 10 00:48:54.309817 sshd[3524]: pam_unix(sshd:session): session closed for user core May 10 00:48:54.312738 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:51876.service: Deactivated successfully. May 10 00:48:54.313552 systemd[1]: session-21.scope: Deactivated successfully. May 10 00:48:54.314439 systemd-logind[1204]: Session 21 logged out. Waiting for processes to exit. May 10 00:48:54.315106 systemd-logind[1204]: Removed session 21. May 10 00:48:59.313907 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:49440.service. May 10 00:48:59.385387 sshd[3537]: Accepted publickey for core from 10.0.0.1 port 49440 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:59.386281 sshd[3537]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:59.389673 systemd-logind[1204]: New session 22 of user core. May 10 00:48:59.390640 systemd[1]: Started session-22.scope. May 10 00:48:59.490827 sshd[3537]: pam_unix(sshd:session): session closed for user core May 10 00:48:59.493658 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:49440.service: Deactivated successfully. May 10 00:48:59.494347 systemd[1]: session-22.scope: Deactivated successfully. May 10 00:48:59.494882 systemd-logind[1204]: Session 22 logged out. Waiting for processes to exit. May 10 00:48:59.495987 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:49446.service. May 10 00:48:59.496683 systemd-logind[1204]: Removed session 22. May 10 00:48:59.527817 sshd[3550]: Accepted publickey for core from 10.0.0.1 port 49446 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0 May 10 00:48:59.528987 sshd[3550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 10 00:48:59.532257 systemd-logind[1204]: New session 23 of user core. May 10 00:48:59.533201 systemd[1]: Started session-23.scope. 
May 10 00:49:00.715326 kubelet[1921]: E0510 00:49:00.715280 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:49:01.102580 kubelet[1921]: I0510 00:49:01.102396 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-nj9h2" podStartSLOduration=75.102377167 podStartE2EDuration="1m15.102377167s" podCreationTimestamp="2025-05-10 00:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:48:15.933478486 +0000 UTC m=+37.333795662" watchObservedRunningTime="2025-05-10 00:49:01.102377167 +0000 UTC m=+82.502694343" May 10 00:49:01.133764 env[1215]: time="2025-05-10T00:49:01.133688533Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 10 00:49:01.139447 env[1215]: time="2025-05-10T00:49:01.139406354Z" level=info msg="StopContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" with timeout 2 (s)" May 10 00:49:01.139713 env[1215]: time="2025-05-10T00:49:01.139666839Z" level=info msg="Stop container \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" with signal terminated" May 10 00:49:01.147825 systemd-networkd[1037]: lxc_health: Link DOWN May 10 00:49:01.147833 systemd-networkd[1037]: lxc_health: Lost carrier May 10 00:49:01.150211 env[1215]: time="2025-05-10T00:49:01.149866670Z" level=info msg="StopContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" with timeout 30 (s)" May 10 00:49:01.150594 env[1215]: time="2025-05-10T00:49:01.150564047Z" level=info msg="Stop container 
\"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" with signal terminated" May 10 00:49:01.158284 systemd[1]: cri-containerd-b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2.scope: Deactivated successfully. May 10 00:49:01.181530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2-rootfs.mount: Deactivated successfully. May 10 00:49:01.193670 systemd[1]: cri-containerd-1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9.scope: Deactivated successfully. May 10 00:49:01.194014 systemd[1]: cri-containerd-1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9.scope: Consumed 6.097s CPU time. May 10 00:49:01.210290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9-rootfs.mount: Deactivated successfully. May 10 00:49:01.306079 env[1215]: time="2025-05-10T00:49:01.306016267Z" level=info msg="shim disconnected" id=1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9 May 10 00:49:01.306079 env[1215]: time="2025-05-10T00:49:01.306057716Z" level=warning msg="cleaning up after shim disconnected" id=1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9 namespace=k8s.io May 10 00:49:01.306079 env[1215]: time="2025-05-10T00:49:01.306065982Z" level=info msg="cleaning up dead shim" May 10 00:49:01.306356 env[1215]: time="2025-05-10T00:49:01.306318894Z" level=info msg="shim disconnected" id=b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2 May 10 00:49:01.306356 env[1215]: time="2025-05-10T00:49:01.306355052Z" level=warning msg="cleaning up after shim disconnected" id=b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2 namespace=k8s.io May 10 00:49:01.306450 env[1215]: time="2025-05-10T00:49:01.306371643Z" level=info msg="cleaning up dead shim" May 10 00:49:01.312857 env[1215]: time="2025-05-10T00:49:01.312803263Z" 
level=warning msg="cleanup warnings time=\"2025-05-10T00:49:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3618 runtime=io.containerd.runc.v2\n" May 10 00:49:01.313848 env[1215]: time="2025-05-10T00:49:01.313806543Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3617 runtime=io.containerd.runc.v2\n" May 10 00:49:01.373094 env[1215]: time="2025-05-10T00:49:01.373012383Z" level=info msg="StopContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" returns successfully" May 10 00:49:01.373593 env[1215]: time="2025-05-10T00:49:01.373519127Z" level=info msg="StopPodSandbox for \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\"" May 10 00:49:01.373593 env[1215]: time="2025-05-10T00:49:01.373582507Z" level=info msg="Container to stop \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.375605 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da-shm.mount: Deactivated successfully. May 10 00:49:01.378862 systemd[1]: cri-containerd-a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da.scope: Deactivated successfully. May 10 00:49:01.394805 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da-rootfs.mount: Deactivated successfully. 
May 10 00:49:01.426675 env[1215]: time="2025-05-10T00:49:01.426622885Z" level=info msg="StopContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" returns successfully" May 10 00:49:01.427318 env[1215]: time="2025-05-10T00:49:01.427241272Z" level=info msg="StopPodSandbox for \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\"" May 10 00:49:01.427406 env[1215]: time="2025-05-10T00:49:01.427359817Z" level=info msg="Container to stop \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.427406 env[1215]: time="2025-05-10T00:49:01.427386048Z" level=info msg="Container to stop \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.427406 env[1215]: time="2025-05-10T00:49:01.427396888Z" level=info msg="Container to stop \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.427406 env[1215]: time="2025-05-10T00:49:01.427407518Z" level=info msg="Container to stop \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.427556 env[1215]: time="2025-05-10T00:49:01.427419642Z" level=info msg="Container to stop \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 10 00:49:01.432314 systemd[1]: cri-containerd-faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda.scope: Deactivated successfully. 
May 10 00:49:01.469275 env[1215]: time="2025-05-10T00:49:01.469191024Z" level=info msg="shim disconnected" id=a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da
May 10 00:49:01.469275 env[1215]: time="2025-05-10T00:49:01.469254645Z" level=warning msg="cleaning up after shim disconnected" id=a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da namespace=k8s.io
May 10 00:49:01.469275 env[1215]: time="2025-05-10T00:49:01.469265415Z" level=info msg="cleaning up dead shim"
May 10 00:49:01.472772 env[1215]: time="2025-05-10T00:49:01.472712785Z" level=info msg="shim disconnected" id=faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda
May 10 00:49:01.473005 env[1215]: time="2025-05-10T00:49:01.472978943Z" level=warning msg="cleaning up after shim disconnected" id=faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda namespace=k8s.io
May 10 00:49:01.473464 env[1215]: time="2025-05-10T00:49:01.473442704Z" level=info msg="cleaning up dead shim"
May 10 00:49:01.478173 env[1215]: time="2025-05-10T00:49:01.478125356Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3678 runtime=io.containerd.runc.v2\n"
May 10 00:49:01.478778 env[1215]: time="2025-05-10T00:49:01.478753412Z" level=info msg="TearDown network for sandbox \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\" successfully"
May 10 00:49:01.478905 env[1215]: time="2025-05-10T00:49:01.478868921Z" level=info msg="StopPodSandbox for \"a09b8b052bce70aff98aa828b501bf255672f65c4d6002c49f17de70c40b91da\" returns successfully"
May 10 00:49:01.483094 env[1215]: time="2025-05-10T00:49:01.483057151Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n"
May 10 00:49:01.483745 env[1215]: time="2025-05-10T00:49:01.483617518Z" level=info msg="TearDown network for sandbox \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" successfully"
May 10 00:49:01.483745 env[1215]: time="2025-05-10T00:49:01.483650130Z" level=info msg="StopPodSandbox for \"faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda\" returns successfully"
May 10 00:49:01.519296 kubelet[1921]: I0510 00:49:01.519229 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-cgroup\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519296 kubelet[1921]: I0510 00:49:01.519289 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cni-path\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519312 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-lib-modules\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519334 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-bpf-maps\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519355 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-net\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519385 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-hostproc\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519418 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d170a372-209b-4e79-90ab-df5c2f0618e7-cilium-config-path\") pod \"d170a372-209b-4e79-90ab-df5c2f0618e7\" (UID: \"d170a372-209b-4e79-90ab-df5c2f0618e7\") "
May 10 00:49:01.519519 kubelet[1921]: I0510 00:49:01.519445 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-hubble-tls\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519660 kubelet[1921]: I0510 00:49:01.519463 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-etc-cni-netd\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519660 kubelet[1921]: I0510 00:49:01.519485 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-config-path\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519660 kubelet[1921]: I0510 00:49:01.519504 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-kernel\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519660 kubelet[1921]: I0510 00:49:01.519525 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg9jx\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-kube-api-access-pg9jx\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.519660 kubelet[1921]: I0510 00:49:01.519550 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7scp4\" (UniqueName: \"kubernetes.io/projected/d170a372-209b-4e79-90ab-df5c2f0618e7-kube-api-access-7scp4\") pod \"d170a372-209b-4e79-90ab-df5c2f0618e7\" (UID: \"d170a372-209b-4e79-90ab-df5c2f0618e7\") "
May 10 00:49:01.520606 kubelet[1921]: I0510 00:49:01.520574 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.520702 kubelet[1921]: I0510 00:49:01.520578 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cni-path" (OuterVolumeSpecName: "cni-path") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.520778 kubelet[1921]: I0510 00:49:01.520575 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.520853 kubelet[1921]: I0510 00:49:01.520628 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-hostproc" (OuterVolumeSpecName: "hostproc") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.520953 kubelet[1921]: I0510 00:49:01.520643 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.521031 kubelet[1921]: I0510 00:49:01.520852 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.521101 kubelet[1921]: I0510 00:49:01.520869 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.521243 kubelet[1921]: I0510 00:49:01.521223 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.523261 kubelet[1921]: I0510 00:49:01.523230 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d170a372-209b-4e79-90ab-df5c2f0618e7-kube-api-access-7scp4" (OuterVolumeSpecName: "kube-api-access-7scp4") pod "d170a372-209b-4e79-90ab-df5c2f0618e7" (UID: "d170a372-209b-4e79-90ab-df5c2f0618e7"). InnerVolumeSpecName "kube-api-access-7scp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:49:01.523748 kubelet[1921]: I0510 00:49:01.523721 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:49:01.523813 kubelet[1921]: I0510 00:49:01.523759 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d170a372-209b-4e79-90ab-df5c2f0618e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d170a372-209b-4e79-90ab-df5c2f0618e7" (UID: "d170a372-209b-4e79-90ab-df5c2f0618e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:49:01.525307 kubelet[1921]: I0510 00:49:01.525278 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-kube-api-access-pg9jx" (OuterVolumeSpecName: "kube-api-access-pg9jx") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "kube-api-access-pg9jx". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:49:01.526247 kubelet[1921]: I0510 00:49:01.526209 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:49:01.620614 kubelet[1921]: I0510 00:49:01.620541 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-run\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.620614 kubelet[1921]: I0510 00:49:01.620604 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-xtables-lock\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620635 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a53273fb-c04c-4bcc-b838-3363ef018074-clustermesh-secrets\") pod \"a53273fb-c04c-4bcc-b838-3363ef018074\" (UID: \"a53273fb-c04c-4bcc-b838-3363ef018074\") "
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620691 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d170a372-209b-4e79-90ab-df5c2f0618e7-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620675 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620706 1921 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620762 1921 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620776 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.620816 kubelet[1921]: I0510 00:49:01.620787 1921 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620798 1921 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pg9jx\" (UniqueName: \"kubernetes.io/projected/a53273fb-c04c-4bcc-b838-3363ef018074-kube-api-access-pg9jx\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620809 1921 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7scp4\" (UniqueName: \"kubernetes.io/projected/d170a372-209b-4e79-90ab-df5c2f0618e7-kube-api-access-7scp4\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620819 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620829 1921 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cni-path\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620839 1921 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-lib-modules\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620852 1921 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620862 1921 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621060 kubelet[1921]: I0510 00:49:01.620873 1921 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-hostproc\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.621256 kubelet[1921]: I0510 00:49:01.620913 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:01.623786 kubelet[1921]: I0510 00:49:01.623671 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a53273fb-c04c-4bcc-b838-3363ef018074-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a53273fb-c04c-4bcc-b838-3363ef018074" (UID: "a53273fb-c04c-4bcc-b838-3363ef018074"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:49:01.721712 kubelet[1921]: I0510 00:49:01.721662 1921 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a53273fb-c04c-4bcc-b838-3363ef018074-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.721712 kubelet[1921]: I0510 00:49:01.721691 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-cilium-run\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.721712 kubelet[1921]: I0510 00:49:01.721699 1921 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a53273fb-c04c-4bcc-b838-3363ef018074-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 10 00:49:01.958770 kubelet[1921]: I0510 00:49:01.958744 1921 scope.go:117] "RemoveContainer" containerID="1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9"
May 10 00:49:01.960069 env[1215]: time="2025-05-10T00:49:01.960012901Z" level=info msg="RemoveContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\""
May 10 00:49:01.962136 systemd[1]: Removed slice kubepods-burstable-poda53273fb_c04c_4bcc_b838_3363ef018074.slice.
May 10 00:49:01.962218 systemd[1]: kubepods-burstable-poda53273fb_c04c_4bcc_b838_3363ef018074.slice: Consumed 6.193s CPU time.
May 10 00:49:01.963353 systemd[1]: Removed slice kubepods-besteffort-podd170a372_209b_4e79_90ab_df5c2f0618e7.slice.
May 10 00:49:02.054060 env[1215]: time="2025-05-10T00:49:02.053978685Z" level=info msg="RemoveContainer for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" returns successfully"
May 10 00:49:02.054453 kubelet[1921]: I0510 00:49:02.054411 1921 scope.go:117] "RemoveContainer" containerID="876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82"
May 10 00:49:02.055747 env[1215]: time="2025-05-10T00:49:02.055718404Z" level=info msg="RemoveContainer for \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\""
May 10 00:49:02.120195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda-rootfs.mount: Deactivated successfully.
May 10 00:49:02.120291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faa619bb69222cc1835a79909c19b639999e262d3ab9c05ea8b251388f243dda-shm.mount: Deactivated successfully.
May 10 00:49:02.120346 systemd[1]: var-lib-kubelet-pods-a53273fb\x2dc04c\x2d4bcc\x2db838\x2d3363ef018074-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpg9jx.mount: Deactivated successfully.
May 10 00:49:02.120407 systemd[1]: var-lib-kubelet-pods-d170a372\x2d209b\x2d4e79\x2d90ab\x2ddf5c2f0618e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7scp4.mount: Deactivated successfully.
May 10 00:49:02.120463 systemd[1]: var-lib-kubelet-pods-a53273fb\x2dc04c\x2d4bcc\x2db838\x2d3363ef018074-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:49:02.120511 systemd[1]: var-lib-kubelet-pods-a53273fb\x2dc04c\x2d4bcc\x2db838\x2d3363ef018074-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
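The mount unit names in these systemd records are the volume paths under /var/lib/kubelet/pods encoded with systemd's unit-name escaping: "/" becomes "-", while "-", "~", and other unsafe bytes become \xNN hex escapes (hence "a53273fb\x2dc04c…" and "kubernetes.io\x7eprojected"). A simplified sketch of that encoding, approximating `systemd-escape --path`; edge cases such as empty paths and leading dots are not handled here:

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd's path-to-unit-name escaping (cf. systemd-escape --path)."""
    stripped = path.strip("/")
    out = []
    for ch in stripped:
        if ch == "/":
            out.append("-")                    # path separators become dashes
        elif ch.isascii() and (ch.isalnum() or ch in "_."):
            out.append(ch)                     # safe characters pass through
        else:
            out.append("\\x%02x" % ord(ch))    # '-', '~', etc. become hex escapes
    return "".join(out)

print(systemd_escape_path("/var/lib/kubelet/pods/a53273fb-c04c"))
# -> var-lib-kubelet-pods-a53273fb\x2dc04c
```

This is why the log shows \x2d wherever a pod UID contains a hyphen and \x7e for the "~" in "kubernetes.io~projected".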
May 10 00:49:02.123907 env[1215]: time="2025-05-10T00:49:02.123849154Z" level=info msg="RemoveContainer for \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\" returns successfully"
May 10 00:49:02.124175 kubelet[1921]: I0510 00:49:02.124145 1921 scope.go:117] "RemoveContainer" containerID="6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8"
May 10 00:49:02.125289 env[1215]: time="2025-05-10T00:49:02.125253275Z" level=info msg="RemoveContainer for \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\""
May 10 00:49:02.179927 env[1215]: time="2025-05-10T00:49:02.179850418Z" level=info msg="RemoveContainer for \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\" returns successfully"
May 10 00:49:02.180430 kubelet[1921]: I0510 00:49:02.180147 1921 scope.go:117] "RemoveContainer" containerID="7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2"
May 10 00:49:02.181293 env[1215]: time="2025-05-10T00:49:02.181261934Z" level=info msg="RemoveContainer for \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\""
May 10 00:49:02.275630 env[1215]: time="2025-05-10T00:49:02.275511921Z" level=info msg="RemoveContainer for \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\" returns successfully"
May 10 00:49:02.276155 kubelet[1921]: I0510 00:49:02.276109 1921 scope.go:117] "RemoveContainer" containerID="8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b"
May 10 00:49:02.277316 env[1215]: time="2025-05-10T00:49:02.277291526Z" level=info msg="RemoveContainer for \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\""
May 10 00:49:02.370746 env[1215]: time="2025-05-10T00:49:02.370676258Z" level=info msg="RemoveContainer for \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\" returns successfully"
May 10 00:49:02.371063 kubelet[1921]: I0510 00:49:02.371015 1921 scope.go:117] "RemoveContainer" containerID="1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9"
May 10 00:49:02.371408 env[1215]: time="2025-05-10T00:49:02.371312097Z" level=error msg="ContainerStatus for \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\": not found"
May 10 00:49:02.371634 kubelet[1921]: E0510 00:49:02.371579 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\": not found" containerID="1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9"
May 10 00:49:02.371795 kubelet[1921]: I0510 00:49:02.371631 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9"} err="failed to get container status \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1ffe6f193d580c50db6ecdb8d2d1741a14e68a7c291ddf9304980ad506b52fd9\": not found"
May 10 00:49:02.371795 kubelet[1921]: I0510 00:49:02.371720 1921 scope.go:117] "RemoveContainer" containerID="876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82"
May 10 00:49:02.372001 env[1215]: time="2025-05-10T00:49:02.371952075Z" level=error msg="ContainerStatus for \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\": not found"
May 10 00:49:02.372135 kubelet[1921]: E0510 00:49:02.372112 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\": not found" containerID="876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82"
May 10 00:49:02.372173 kubelet[1921]: I0510 00:49:02.372149 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82"} err="failed to get container status \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\": rpc error: code = NotFound desc = an error occurred when try to find container \"876a9a7d3e92aaa651510144184c0cb24fcd29b46bfa04c7617cd0cdb0f69f82\": not found"
May 10 00:49:02.372197 kubelet[1921]: I0510 00:49:02.372175 1921 scope.go:117] "RemoveContainer" containerID="6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8"
May 10 00:49:02.372413 env[1215]: time="2025-05-10T00:49:02.372346766Z" level=error msg="ContainerStatus for \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\": not found"
May 10 00:49:02.372536 kubelet[1921]: E0510 00:49:02.372512 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\": not found" containerID="6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8"
May 10 00:49:02.372599 kubelet[1921]: I0510 00:49:02.372534 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8"} err="failed to get container status \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"6480fde9698a104ef28935af133a6a5553f7706e63334b4a0ed78977d543b1a8\": not found"
May 10 00:49:02.372599 kubelet[1921]: I0510 00:49:02.372549 1921 scope.go:117] "RemoveContainer" containerID="7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2"
May 10 00:49:02.372750 env[1215]: time="2025-05-10T00:49:02.372699186Z" level=error msg="ContainerStatus for \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\": not found"
May 10 00:49:02.372865 kubelet[1921]: E0510 00:49:02.372845 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\": not found" containerID="7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2"
May 10 00:49:02.372935 kubelet[1921]: I0510 00:49:02.372872 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2"} err="failed to get container status \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fbfae3a824c084a747c5620965ad79c74d2737f516de32f09eca25523c8c7d2\": not found"
May 10 00:49:02.372935 kubelet[1921]: I0510 00:49:02.372906 1921 scope.go:117] "RemoveContainer" containerID="8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b"
May 10 00:49:02.373126 env[1215]: time="2025-05-10T00:49:02.373076955Z" level=error msg="ContainerStatus for \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\": not found"
May 10 00:49:02.373298 kubelet[1921]: E0510 00:49:02.373268 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\": not found" containerID="8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b"
May 10 00:49:02.373399 kubelet[1921]: I0510 00:49:02.373304 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b"} err="failed to get container status \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\": rpc error: code = NotFound desc = an error occurred when try to find container \"8da72a79ee3290b6614465d71114c49ef874fd7ec9c36d23e730db1fb2b1557b\": not found"
May 10 00:49:02.373399 kubelet[1921]: I0510 00:49:02.373325 1921 scope.go:117] "RemoveContainer" containerID="b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2"
May 10 00:49:02.374472 env[1215]: time="2025-05-10T00:49:02.374446260Z" level=info msg="RemoveContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\""
May 10 00:49:02.417486 env[1215]: time="2025-05-10T00:49:02.417430199Z" level=info msg="RemoveContainer for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" returns successfully"
May 10 00:49:02.417717 kubelet[1921]: I0510 00:49:02.417679 1921 scope.go:117] "RemoveContainer" containerID="b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2"
May 10 00:49:02.418035 env[1215]: time="2025-05-10T00:49:02.417967993Z" level=error msg="ContainerStatus for \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\": not found"
May 10 00:49:02.418121 kubelet[1921]: E0510 00:49:02.418106 1921 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\": not found" containerID="b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2"
May 10 00:49:02.418155 kubelet[1921]: I0510 00:49:02.418134 1921 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2"} err="failed to get container status \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4db54b06e2fa748028bcb62712a1b6700475fc96516b7df93217843966bb9e2\": not found"
May 10 00:49:02.717266 kubelet[1921]: I0510 00:49:02.717217 1921 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" path="/var/lib/kubelet/pods/a53273fb-c04c-4bcc-b838-3363ef018074/volumes"
May 10 00:49:02.717724 kubelet[1921]: I0510 00:49:02.717699 1921 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d170a372-209b-4e79-90ab-df5c2f0618e7" path="/var/lib/kubelet/pods/d170a372-209b-4e79-90ab-df5c2f0618e7/volumes"
May 10 00:49:02.834162 sshd[3550]: pam_unix(sshd:session): session closed for user core
May 10 00:49:02.836719 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:49446.service: Deactivated successfully.
May 10 00:49:02.837288 systemd[1]: session-23.scope: Deactivated successfully.
May 10 00:49:02.837774 systemd-logind[1204]: Session 23 logged out. Waiting for processes to exit.
May 10 00:49:02.838873 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:49460.service.
May 10 00:49:02.839530 systemd-logind[1204]: Removed session 23.
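The NotFound errors above are benign: after successfully deleting each container, the kubelet queries its status again, and the runtime answers `code = NotFound` because the container is already gone; the pod_container_deletor logs the error but the cleanup proceeds. A sketch of that idempotent-deletion pattern, using a hypothetical `client` object with `remove()`/`status()` methods (the real kubelet goes through the CRI gRPC API, not this interface):

```python
class NotFound(Exception):
    """Stand-in for the runtime's NotFound RPC error."""

def remove_container(client, container_id: str) -> bool:
    """Remove a container, treating 'already gone' as success.

    Mirrors the log flow above: a NotFound on remove or on the follow-up
    status query means the container no longer exists, which is the
    desired end state either way.
    """
    try:
        client.remove(container_id)
    except NotFound:
        return True  # removed by someone else first; treat as success
    try:
        client.status(container_id)
    except NotFound:
        return True  # confirmed gone
    return False     # still reported by the runtime: removal failed
```

Retrying the whole sequence is safe, which is exactly why the kubelet can re-issue ContainerStatus for IDs it already removed without breaking anything.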
May 10 00:49:02.877195 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 49460 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:49:02.878195 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:02.881595 systemd-logind[1204]: New session 24 of user core.
May 10 00:49:02.882479 systemd[1]: Started session-24.scope.
May 10 00:49:03.307785 sshd[3709]: pam_unix(sshd:session): session closed for user core
May 10 00:49:03.312130 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:49466.service.
May 10 00:49:03.315966 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:49460.service: Deactivated successfully.
May 10 00:49:03.316683 systemd[1]: session-24.scope: Deactivated successfully.
May 10 00:49:03.317858 systemd-logind[1204]: Session 24 logged out. Waiting for processes to exit.
May 10 00:49:03.318824 systemd-logind[1204]: Removed session 24.
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334494 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d170a372-209b-4e79-90ab-df5c2f0618e7" containerName="cilium-operator"
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334537 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="mount-cgroup"
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334546 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="mount-bpf-fs"
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334552 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="clean-cilium-state"
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334557 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="apply-sysctl-overwrites"
May 10 00:49:03.334553 kubelet[1921]: E0510 00:49:03.334563 1921 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="cilium-agent"
May 10 00:49:03.335066 kubelet[1921]: I0510 00:49:03.334587 1921 memory_manager.go:354] "RemoveStaleState removing state" podUID="d170a372-209b-4e79-90ab-df5c2f0618e7" containerName="cilium-operator"
May 10 00:49:03.335066 kubelet[1921]: I0510 00:49:03.334596 1921 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53273fb-c04c-4bcc-b838-3363ef018074" containerName="cilium-agent"
May 10 00:49:03.340510 systemd[1]: Created slice kubepods-burstable-pod2619dc35_572c_4737_96da_90c18e6c2b4c.slice.
May 10 00:49:03.353720 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 49466 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:49:03.354861 sshd[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:03.359314 systemd[1]: Started session-25.scope.
May 10 00:49:03.359577 systemd-logind[1204]: New session 25 of user core.
May 10 00:49:03.432076 kubelet[1921]: I0510 00:49:03.432039 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-net\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432076 kubelet[1921]: I0510 00:49:03.432074 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cni-path\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432094 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-hostproc\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432107 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-clustermesh-secrets\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432120 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-run\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432131 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-lib-modules\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432144 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-config-path\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432275 kubelet[1921]: I0510 00:49:03.432156 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrnr\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-kube-api-access-rfrnr\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432168 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-xtables-lock\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432180 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-ipsec-secrets\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432191 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-kernel\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432203 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-etc-cni-netd\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432217 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-bpf-maps\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432424 kubelet[1921]: I0510 00:49:03.432230 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-cgroup\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.432557 kubelet[1921]: I0510 00:49:03.432241 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-hubble-tls\") pod \"cilium-fj766\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") " pod="kube-system/cilium-fj766"
May 10 00:49:03.476777 sshd[3720]: pam_unix(sshd:session): session closed for user core
May 10 00:49:03.480956 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:49470.service.
May 10 00:49:03.481424 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:49466.service: Deactivated successfully.
May 10 00:49:03.482932 systemd[1]: session-25.scope: Deactivated successfully.
May 10 00:49:03.483699 systemd-logind[1204]: Session 25 logged out. Waiting for processes to exit.
May 10 00:49:03.484601 systemd-logind[1204]: Removed session 25.
May 10 00:49:03.488844 kubelet[1921]: E0510 00:49:03.488801 1921 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-rfrnr lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-fj766" podUID="2619dc35-572c-4737-96da-90c18e6c2b4c"
May 10 00:49:03.524457 sshd[3733]: Accepted publickey for core from 10.0.0.1 port 49470 ssh2: RSA SHA256:YPfNqeDLVNRLKHCWIqCEBm90yIBoYAoMePhSYn7FUn0
May 10 00:49:03.525627 sshd[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 10 00:49:03.530049 systemd-logind[1204]: New session 26 of user core.
May 10 00:49:03.530826 systemd[1]: Started session-26.scope.
May 10 00:49:03.759495 kubelet[1921]: E0510 00:49:03.759451 1921 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034842 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-xtables-lock\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034899 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-ipsec-secrets\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034926 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-clustermesh-secrets\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034947 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-config-path\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034970 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-hubble-tls\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.034998 kubelet[1921]: I0510 00:49:04.034991 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-net\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035009 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cni-path\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035026 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-etc-cni-netd\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035044 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-kernel\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035056 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-hostproc\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035070 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-run\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035308 kubelet[1921]: I0510 00:49:04.035091 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfrnr\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-kube-api-access-rfrnr\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035517 kubelet[1921]: I0510 00:49:04.035106 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-bpf-maps\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035517 kubelet[1921]: I0510 00:49:04.035121 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-lib-modules\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.035517 kubelet[1921]: I0510 00:49:04.035132 1921 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-cgroup\") pod \"2619dc35-572c-4737-96da-90c18e6c2b4c\" (UID: \"2619dc35-572c-4737-96da-90c18e6c2b4c\") "
May 10 00:49:04.037496 kubelet[1921]: I0510 00:49:04.034964 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037604 kubelet[1921]: I0510 00:49:04.035186 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037674 kubelet[1921]: I0510 00:49:04.037648 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cni-path" (OuterVolumeSpecName: "cni-path") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037768 kubelet[1921]: I0510 00:49:04.035242 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037858 kubelet[1921]: I0510 00:49:04.036942 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 10 00:49:04.037858 kubelet[1921]: I0510 00:49:04.036962 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-hostproc" (OuterVolumeSpecName: "hostproc") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037858 kubelet[1921]: I0510 00:49:04.036970 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037983 kubelet[1921]: I0510 00:49:04.037437 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:49:04.037983 kubelet[1921]: I0510 00:49:04.037463 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037983 kubelet[1921]: I0510 00:49:04.037477 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.037983 kubelet[1921]: I0510 00:49:04.037614 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 10 00:49:04.037983 kubelet[1921]: I0510 00:49:04.037682 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.038095 kubelet[1921]: I0510 00:49:04.037698 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 10 00:49:04.039034 systemd[1]: var-lib-kubelet-pods-2619dc35\x2d572c\x2d4737\x2d96da\x2d90c18e6c2b4c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 10 00:49:04.039577 kubelet[1921]: I0510 00:49:04.039264 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-kube-api-access-rfrnr" (OuterVolumeSpecName: "kube-api-access-rfrnr") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "kube-api-access-rfrnr". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:49:04.039577 kubelet[1921]: I0510 00:49:04.039292 1921 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2619dc35-572c-4737-96da-90c18e6c2b4c" (UID: "2619dc35-572c-4737-96da-90c18e6c2b4c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 10 00:49:04.039146 systemd[1]: var-lib-kubelet-pods-2619dc35\x2d572c\x2d4737\x2d96da\x2d90c18e6c2b4c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 10 00:49:04.039213 systemd[1]: var-lib-kubelet-pods-2619dc35\x2d572c\x2d4737\x2d96da\x2d90c18e6c2b4c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 10 00:49:04.041837 systemd[1]: var-lib-kubelet-pods-2619dc35\x2d572c\x2d4737\x2d96da\x2d90c18e6c2b4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drfrnr.mount: Deactivated successfully.
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135731 1921 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135766 1921 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cni-path\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135774 1921 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135781 1921 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135790 1921 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135797 1921 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-hostproc\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135803 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-run\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.135793 kubelet[1921]: I0510 00:49:04.135810 1921 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-lib-modules\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135817 1921 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-rfrnr\" (UniqueName: \"kubernetes.io/projected/2619dc35-572c-4737-96da-90c18e6c2b4c-kube-api-access-rfrnr\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135824 1921 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135832 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135837 1921 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135844 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135850 1921 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2619dc35-572c-4737-96da-90c18e6c2b4c-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.136234 kubelet[1921]: I0510 00:49:04.135856 1921 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2619dc35-572c-4737-96da-90c18e6c2b4c-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 10 00:49:04.721474 systemd[1]: Removed slice kubepods-burstable-pod2619dc35_572c_4737_96da_90c18e6c2b4c.slice.
May 10 00:49:05.088080 systemd[1]: Created slice kubepods-burstable-podc65f0e4f_9288_4d2f_ad30_8a14ee1ec116.slice.
May 10 00:49:05.140740 kubelet[1921]: I0510 00:49:05.140668 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-clustermesh-secrets\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.140740 kubelet[1921]: I0510 00:49:05.140717 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-cilium-cgroup\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.140740 kubelet[1921]: I0510 00:49:05.140730 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-lib-modules\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.140740 kubelet[1921]: I0510 00:49:05.140741 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-xtables-lock\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140757 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-cilium-run\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140770 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-hostproc\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140784 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-cni-path\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140799 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-host-proc-sys-kernel\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140847 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-cilium-config-path\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141201 kubelet[1921]: I0510 00:49:05.140863 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-hubble-tls\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141338 kubelet[1921]: I0510 00:49:05.140894 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-bpf-maps\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141338 kubelet[1921]: I0510 00:49:05.140962 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-etc-cni-netd\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141338 kubelet[1921]: I0510 00:49:05.140999 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-cilium-ipsec-secrets\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141338 kubelet[1921]: I0510 00:49:05.141014 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-host-proc-sys-net\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.141338 kubelet[1921]: I0510 00:49:05.141031 1921 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84x8j\" (UniqueName: \"kubernetes.io/projected/c65f0e4f-9288-4d2f-ad30-8a14ee1ec116-kube-api-access-84x8j\") pod \"cilium-6xb52\" (UID: \"c65f0e4f-9288-4d2f-ad30-8a14ee1ec116\") " pod="kube-system/cilium-6xb52"
May 10 00:49:05.390727 kubelet[1921]: E0510 00:49:05.390600 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:05.391368 env[1215]: time="2025-05-10T00:49:05.391324831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xb52,Uid:c65f0e4f-9288-4d2f-ad30-8a14ee1ec116,Namespace:kube-system,Attempt:0,}"
May 10 00:49:05.577105 env[1215]: time="2025-05-10T00:49:05.577019719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 10 00:49:05.577105 env[1215]: time="2025-05-10T00:49:05.577053524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 10 00:49:05.577105 env[1215]: time="2025-05-10T00:49:05.577062942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 10 00:49:05.577341 env[1215]: time="2025-05-10T00:49:05.577236873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436 pid=3762 runtime=io.containerd.runc.v2
May 10 00:49:05.587215 systemd[1]: Started cri-containerd-853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436.scope.
May 10 00:49:05.608160 env[1215]: time="2025-05-10T00:49:05.608110266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6xb52,Uid:c65f0e4f-9288-4d2f-ad30-8a14ee1ec116,Namespace:kube-system,Attempt:0,} returns sandbox id \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\""
May 10 00:49:05.609033 kubelet[1921]: E0510 00:49:05.608995 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:05.611762 env[1215]: time="2025-05-10T00:49:05.611721840Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 10 00:49:05.627183 env[1215]: time="2025-05-10T00:49:05.627118261Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6\""
May 10 00:49:05.628789 env[1215]: time="2025-05-10T00:49:05.628745403Z" level=info msg="StartContainer for \"6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6\""
May 10 00:49:05.645656 systemd[1]: Started cri-containerd-6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6.scope.
May 10 00:49:05.669007 env[1215]: time="2025-05-10T00:49:05.668936735Z" level=info msg="StartContainer for \"6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6\" returns successfully"
May 10 00:49:05.675776 systemd[1]: cri-containerd-6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6.scope: Deactivated successfully.
May 10 00:49:05.708076 env[1215]: time="2025-05-10T00:49:05.707994363Z" level=info msg="shim disconnected" id=6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6
May 10 00:49:05.708076 env[1215]: time="2025-05-10T00:49:05.708057293Z" level=warning msg="cleaning up after shim disconnected" id=6c873e2fd8b5bd75369120a0b7932d583091bbdace81381da3b130fd174007e6 namespace=k8s.io
May 10 00:49:05.708076 env[1215]: time="2025-05-10T00:49:05.708073534Z" level=info msg="cleaning up dead shim"
May 10 00:49:05.714584 kubelet[1921]: E0510 00:49:05.714545 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:05.715390 env[1215]: time="2025-05-10T00:49:05.715310999Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3848 runtime=io.containerd.runc.v2\n"
May 10 00:49:05.969311 kubelet[1921]: E0510 00:49:05.969270 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:05.971449 env[1215]: time="2025-05-10T00:49:05.971407886Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 10 00:49:05.986396 env[1215]: time="2025-05-10T00:49:05.986314615Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01\""
May 10 00:49:05.987234 env[1215]: time="2025-05-10T00:49:05.987184118Z" level=info msg="StartContainer for \"6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01\""
May 10 00:49:06.005695 systemd[1]: Started cri-containerd-6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01.scope.
May 10 00:49:06.029579 env[1215]: time="2025-05-10T00:49:06.029530069Z" level=info msg="StartContainer for \"6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01\" returns successfully"
May 10 00:49:06.034913 systemd[1]: cri-containerd-6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01.scope: Deactivated successfully.
May 10 00:49:06.056476 env[1215]: time="2025-05-10T00:49:06.056419816Z" level=info msg="shim disconnected" id=6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01
May 10 00:49:06.056476 env[1215]: time="2025-05-10T00:49:06.056468568Z" level=warning msg="cleaning up after shim disconnected" id=6d7b8ccdcf70c10c447266ec75a24a8d311ae91949bd1e6003dd62380f093e01 namespace=k8s.io
May 10 00:49:06.056476 env[1215]: time="2025-05-10T00:49:06.056477385Z" level=info msg="cleaning up dead shim"
May 10 00:49:06.062462 env[1215]: time="2025-05-10T00:49:06.062402442Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3908 runtime=io.containerd.runc.v2\n"
May 10 00:49:06.714604 kubelet[1921]: E0510 00:49:06.714570 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:06.716450 kubelet[1921]: I0510 00:49:06.716415 1921 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2619dc35-572c-4737-96da-90c18e6c2b4c" path="/var/lib/kubelet/pods/2619dc35-572c-4737-96da-90c18e6c2b4c/volumes"
May 10 00:49:06.971958 kubelet[1921]: E0510 00:49:06.971839 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:06.973095 env[1215]: time="2025-05-10T00:49:06.973065239Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 10 00:49:07.052139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3338661684.mount: Deactivated successfully.
May 10 00:49:07.060443 env[1215]: time="2025-05-10T00:49:07.060332534Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce\""
May 10 00:49:07.061015 env[1215]: time="2025-05-10T00:49:07.060970225Z" level=info msg="StartContainer for \"f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce\""
May 10 00:49:07.081264 systemd[1]: Started cri-containerd-f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce.scope.
May 10 00:49:07.110395 env[1215]: time="2025-05-10T00:49:07.110339682Z" level=info msg="StartContainer for \"f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce\" returns successfully"
May 10 00:49:07.111622 systemd[1]: cri-containerd-f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce.scope: Deactivated successfully.
May 10 00:49:07.199940 env[1215]: time="2025-05-10T00:49:07.199866293Z" level=info msg="shim disconnected" id=f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce
May 10 00:49:07.199940 env[1215]: time="2025-05-10T00:49:07.199946285Z" level=warning msg="cleaning up after shim disconnected" id=f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce namespace=k8s.io
May 10 00:49:07.200183 env[1215]: time="2025-05-10T00:49:07.199955642Z" level=info msg="cleaning up dead shim"
May 10 00:49:07.206136 env[1215]: time="2025-05-10T00:49:07.206083821Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3967 runtime=io.containerd.runc.v2\n"
May 10 00:49:07.245985 systemd[1]: run-containerd-runc-k8s.io-f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce-runc.wPhRjL.mount: Deactivated successfully.
May 10 00:49:07.246075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62c5ab302ecd28d1fbba90067092cbfe7c2a94085371d0e6685e31d8ca3a2ce-rootfs.mount: Deactivated successfully.
May 10 00:49:07.976148 kubelet[1921]: E0510 00:49:07.976107 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:07.978173 env[1215]: time="2025-05-10T00:49:07.978132407Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 10 00:49:08.227122 env[1215]: time="2025-05-10T00:49:08.226939485Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849\""
May 10 00:49:08.227578 env[1215]: time="2025-05-10T00:49:08.227552379Z" level=info msg="StartContainer for \"c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849\""
May 10 00:49:08.247341 systemd[1]: run-containerd-runc-k8s.io-c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849-runc.tEQ4uz.mount: Deactivated successfully.
May 10 00:49:08.249186 systemd[1]: Started cri-containerd-c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849.scope.
May 10 00:49:08.272930 systemd[1]: cri-containerd-c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849.scope: Deactivated successfully.
May 10 00:49:08.410150 env[1215]: time="2025-05-10T00:49:08.410091617Z" level=info msg="StartContainer for \"c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849\" returns successfully"
May 10 00:49:08.425631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849-rootfs.mount: Deactivated successfully.
May 10 00:49:08.538309 env[1215]: time="2025-05-10T00:49:08.538174597Z" level=info msg="shim disconnected" id=c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849
May 10 00:49:08.538309 env[1215]: time="2025-05-10T00:49:08.538222638Z" level=warning msg="cleaning up after shim disconnected" id=c495453552380a85558e21fbd13bd181d9a17009be952831daf13b30219d7849 namespace=k8s.io
May 10 00:49:08.538309 env[1215]: time="2025-05-10T00:49:08.538232246Z" level=info msg="cleaning up dead shim"
May 10 00:49:08.544902 env[1215]: time="2025-05-10T00:49:08.544819074Z" level=warning msg="cleanup warnings time=\"2025-05-10T00:49:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4020 runtime=io.containerd.runc.v2\n"
May 10 00:49:08.759972 kubelet[1921]: E0510 00:49:08.759908 1921 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 10 00:49:08.979914 kubelet[1921]: E0510 00:49:08.979873 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:08.981732 env[1215]: time="2025-05-10T00:49:08.981689477Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 10 00:49:08.995614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645252526.mount: Deactivated successfully.
May 10 00:49:09.172653 env[1215]: time="2025-05-10T00:49:09.172578379Z" level=info msg="CreateContainer within sandbox \"853d60a1df4e0929b4b6bd3941cd1aa96848126cb96a14367531b88beb9d6436\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"41c73883e656bd70b03ca318892295b651f24eb7f2fd6cdad96dcff8cdc00181\""
May 10 00:49:09.173226 env[1215]: time="2025-05-10T00:49:09.173199578Z" level=info msg="StartContainer for \"41c73883e656bd70b03ca318892295b651f24eb7f2fd6cdad96dcff8cdc00181\""
May 10 00:49:09.187447 systemd[1]: Started cri-containerd-41c73883e656bd70b03ca318892295b651f24eb7f2fd6cdad96dcff8cdc00181.scope.
May 10 00:49:09.215503 env[1215]: time="2025-05-10T00:49:09.215427856Z" level=info msg="StartContainer for \"41c73883e656bd70b03ca318892295b651f24eb7f2fd6cdad96dcff8cdc00181\" returns successfully"
May 10 00:49:09.476929 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
May 10 00:49:09.984435 kubelet[1921]: E0510 00:49:09.984402 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:09.996968 kubelet[1921]: I0510 00:49:09.996907 1921 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6xb52" podStartSLOduration=4.996871226 podStartE2EDuration="4.996871226s" podCreationTimestamp="2025-05-10 00:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-10 00:49:09.99623077 +0000 UTC m=+91.396547966" watchObservedRunningTime="2025-05-10 00:49:09.996871226 +0000 UTC m=+91.397188402"
May 10 00:49:10.714575 kubelet[1921]: E0510 00:49:10.714539 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:11.391165 kubelet[1921]: E0510 00:49:11.391128 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 10 00:49:11.697348 kubelet[1921]: I0510 00:49:11.697290 1921 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-10T00:49:11Z","lastTransitionTime":"2025-05-10T00:49:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 10 00:49:12.064775 systemd-networkd[1037]: lxc_health: Link UP
May 10 00:49:12.073181 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 10 00:49:12.072978 systemd-networkd[1037]: lxc_health: Gained carrier
May 10 00:49:12.184639 systemd[1]: run-containerd-runc-k8s.io-41c73883e656bd70b03ca318892295b651f24eb7f2fd6cdad96dcff8cdc00181-runc.POn2nT.mount: Deactivated successfully.
May 10 00:49:13.375069 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 10 00:49:13.394915 kubelet[1921]: E0510 00:49:13.392628 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:49:13.994593 kubelet[1921]: E0510 00:49:13.994552 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:49:14.996714 kubelet[1921]: E0510 00:49:14.996678 1921 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 10 00:49:18.491740 sshd[3733]: pam_unix(sshd:session): session closed for user core May 10 00:49:18.494168 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:49470.service: Deactivated successfully. May 10 00:49:18.494839 systemd[1]: session-26.scope: Deactivated successfully. May 10 00:49:18.495402 systemd-logind[1204]: Session 26 logged out. Waiting for processes to exit. May 10 00:49:18.496104 systemd-logind[1204]: Removed session 26.