Feb 12 20:19:43.809378 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Feb 12 18:05:31 -00 2024
Feb 12 20:19:43.809397 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:19:43.809405 kernel: BIOS-provided physical RAM map:
Feb 12 20:19:43.809410 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 12 20:19:43.809415 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 12 20:19:43.809421 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 12 20:19:43.809427 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable
Feb 12 20:19:43.809433 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved
Feb 12 20:19:43.809440 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 12 20:19:43.809445 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 12 20:19:43.809451 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 12 20:19:43.809462 kernel: NX (Execute Disable) protection: active
Feb 12 20:19:43.809468 kernel: SMBIOS 2.8 present.
Feb 12 20:19:43.809474 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 12 20:19:43.809482 kernel: Hypervisor detected: KVM
Feb 12 20:19:43.809488 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 12 20:19:43.809494 kernel: kvm-clock: cpu 0, msr 87faa001, primary cpu clock
Feb 12 20:19:43.809500 kernel: kvm-clock: using sched offset of 2280612306 cycles
Feb 12 20:19:43.809506 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 12 20:19:43.809512 kernel: tsc: Detected 2794.748 MHz processor
Feb 12 20:19:43.809518 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 12 20:19:43.809525 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 12 20:19:43.809531 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000
Feb 12 20:19:43.809538 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 12 20:19:43.809544 kernel: Using GB pages for direct mapping
Feb 12 20:19:43.809550 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:19:43.809556 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS )
Feb 12 20:19:43.809562 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809568 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809574 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809580 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 12 20:19:43.809586 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809593 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809599 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 12 20:19:43.809605 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec]
Feb 12 20:19:43.809611 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78]
Feb 12 20:19:43.809617 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 12 20:19:43.809623 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c]
Feb 12 20:19:43.809629 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4]
Feb 12 20:19:43.809635 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc]
Feb 12 20:19:43.809645 kernel: No NUMA configuration found
Feb 12 20:19:43.809651 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff]
Feb 12 20:19:43.809658 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff]
Feb 12 20:19:43.809664 kernel: Zone ranges:
Feb 12 20:19:43.809671 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 12 20:19:43.809677 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff]
Feb 12 20:19:43.809685 kernel: Normal empty
Feb 12 20:19:43.809691 kernel: Movable zone start for each node
Feb 12 20:19:43.809697 kernel: Early memory node ranges
Feb 12 20:19:43.809704 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 12 20:19:43.809710 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff]
Feb 12 20:19:43.809717 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff]
Feb 12 20:19:43.809723 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 12 20:19:43.809729 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 12 20:19:43.809736 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges
Feb 12 20:19:43.809743 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 12 20:19:43.809750 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 12 20:19:43.809756 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 12 20:19:43.809763 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 12 20:19:43.809769 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 12 20:19:43.809776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 12 20:19:43.809782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 12 20:19:43.809789 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 12 20:19:43.809795 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 12 20:19:43.809803 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 12 20:19:43.809809 kernel: TSC deadline timer available
Feb 12 20:19:43.809815 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 12 20:19:43.809822 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 12 20:19:43.809828 kernel: kvm-guest: setup PV sched yield
Feb 12 20:19:43.809834 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices
Feb 12 20:19:43.809841 kernel: Booting paravirtualized kernel on KVM
Feb 12 20:19:43.809848 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 12 20:19:43.809854 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 12 20:19:43.809861 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 12 20:19:43.809868 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 12 20:19:43.809875 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 12 20:19:43.809881 kernel: kvm-guest: setup async PF for cpu 0
Feb 12 20:19:43.809887 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0
Feb 12 20:19:43.809894 kernel: kvm-guest: PV spinlocks enabled
Feb 12 20:19:43.809900 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 12 20:19:43.809906 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632733
Feb 12 20:19:43.809913 kernel: Policy zone: DMA32
Feb 12 20:19:43.809920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:19:43.809928 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:19:43.809935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:19:43.809942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:19:43.809948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:19:43.809955 kernel: Memory: 2438768K/2571756K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 132728K reserved, 0K cma-reserved)
Feb 12 20:19:43.809962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 12 20:19:43.809968 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 12 20:19:43.809974 kernel: ftrace: allocated 135 pages with 4 groups
Feb 12 20:19:43.809995 kernel: rcu: Hierarchical RCU implementation.
Feb 12 20:19:43.810003 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:19:43.810009 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 12 20:19:43.810016 kernel: Rude variant of Tasks RCU enabled.
Feb 12 20:19:43.810023 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:19:43.810029 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:19:43.810036 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 12 20:19:43.810042 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 12 20:19:43.810049 kernel: random: crng init done
Feb 12 20:19:43.810056 kernel: Console: colour VGA+ 80x25
Feb 12 20:19:43.810063 kernel: printk: console [ttyS0] enabled
Feb 12 20:19:43.810069 kernel: ACPI: Core revision 20210730
Feb 12 20:19:43.810076 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 12 20:19:43.810083 kernel: APIC: Switch to symmetric I/O mode setup
Feb 12 20:19:43.810089 kernel: x2apic enabled
Feb 12 20:19:43.810095 kernel: Switched APIC routing to physical x2apic.
Feb 12 20:19:43.810102 kernel: kvm-guest: setup PV IPIs
Feb 12 20:19:43.810108 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 12 20:19:43.810116 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 12 20:19:43.810123 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 12 20:19:43.810129 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 12 20:19:43.810136 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 12 20:19:43.810142 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 12 20:19:43.810149 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 12 20:19:43.810155 kernel: Spectre V2 : Mitigation: Retpolines
Feb 12 20:19:43.810162 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 12 20:19:43.810169 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 12 20:19:43.810181 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 12 20:19:43.810188 kernel: RETBleed: Mitigation: untrained return thunk
Feb 12 20:19:43.810195 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 12 20:19:43.810203 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 12 20:19:43.810210 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 12 20:19:43.810217 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 12 20:19:43.810224 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 12 20:19:43.810231 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 12 20:19:43.810238 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 12 20:19:43.810246 kernel: Freeing SMP alternatives memory: 32K
Feb 12 20:19:43.810253 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:19:43.810260 kernel: LSM: Security Framework initializing
Feb 12 20:19:43.810266 kernel: SELinux: Initializing.
Feb 12 20:19:43.810273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:19:43.810280 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:19:43.810287 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 12 20:19:43.810295 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 12 20:19:43.810302 kernel: ... version: 0
Feb 12 20:19:43.810309 kernel: ... bit width: 48
Feb 12 20:19:43.810315 kernel: ... generic registers: 6
Feb 12 20:19:43.810322 kernel: ... value mask: 0000ffffffffffff
Feb 12 20:19:43.810329 kernel: ... max period: 00007fffffffffff
Feb 12 20:19:43.810336 kernel: ... fixed-purpose events: 0
Feb 12 20:19:43.810343 kernel: ... event mask: 000000000000003f
Feb 12 20:19:43.810349 kernel: signal: max sigframe size: 1776
Feb 12 20:19:43.810357 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:19:43.810364 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:19:43.810371 kernel: x86: Booting SMP configuration:
Feb 12 20:19:43.810377 kernel: .... node #0, CPUs: #1
Feb 12 20:19:43.810384 kernel: kvm-clock: cpu 1, msr 87faa041, secondary cpu clock
Feb 12 20:19:43.810391 kernel: kvm-guest: setup async PF for cpu 1
Feb 12 20:19:43.810398 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0
Feb 12 20:19:43.810405 kernel: #2
Feb 12 20:19:43.810412 kernel: kvm-clock: cpu 2, msr 87faa081, secondary cpu clock
Feb 12 20:19:43.810418 kernel: kvm-guest: setup async PF for cpu 2
Feb 12 20:19:43.810426 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0
Feb 12 20:19:43.810433 kernel: #3
Feb 12 20:19:43.810440 kernel: kvm-clock: cpu 3, msr 87faa0c1, secondary cpu clock
Feb 12 20:19:43.810446 kernel: kvm-guest: setup async PF for cpu 3
Feb 12 20:19:43.810458 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0
Feb 12 20:19:43.810465 kernel: smp: Brought up 1 node, 4 CPUs
Feb 12 20:19:43.810472 kernel: smpboot: Max logical packages: 1
Feb 12 20:19:43.810478 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 12 20:19:43.810485 kernel: devtmpfs: initialized
Feb 12 20:19:43.810493 kernel: x86/mm: Memory block size: 128MB
Feb 12 20:19:43.810500 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:19:43.810507 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 12 20:19:43.810514 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:19:43.810521 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:19:43.810527 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:19:43.810547 kernel: audit: type=2000 audit(1707769183.429:1): state=initialized audit_enabled=0 res=1
Feb 12 20:19:43.810554 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:19:43.810561 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 12 20:19:43.810569 kernel: cpuidle: using governor menu
Feb 12 20:19:43.810576 kernel: ACPI: bus type PCI registered
Feb 12 20:19:43.810582 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:19:43.810589 kernel: dca service started, version 1.12.1
Feb 12 20:19:43.810596 kernel: PCI: Using configuration type 1 for base access
Feb 12 20:19:43.810603 kernel: PCI: Using configuration type 1 for extended access
Feb 12 20:19:43.810610 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 12 20:19:43.810616 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:19:43.810623 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:19:43.810631 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:19:43.810638 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:19:43.810645 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:19:43.810652 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:19:43.810659 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:19:43.810665 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:19:43.810672 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:19:43.810679 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:19:43.810686 kernel: ACPI: Interpreter enabled
Feb 12 20:19:43.810693 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 12 20:19:43.810700 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 12 20:19:43.810707 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 12 20:19:43.810714 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 12 20:19:43.810721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 12 20:19:43.810834 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:19:43.810846 kernel: acpiphp: Slot [3] registered
Feb 12 20:19:43.810853 kernel: acpiphp: Slot [4] registered
Feb 12 20:19:43.810861 kernel: acpiphp: Slot [5] registered
Feb 12 20:19:43.810875 kernel: acpiphp: Slot [6] registered
Feb 12 20:19:43.810882 kernel: acpiphp: Slot [7] registered
Feb 12 20:19:43.810889 kernel: acpiphp: Slot [8] registered
Feb 12 20:19:43.810896 kernel: acpiphp: Slot [9] registered
Feb 12 20:19:43.810903 kernel: acpiphp: Slot [10] registered
Feb 12 20:19:43.810909 kernel: acpiphp: Slot [11] registered
Feb 12 20:19:43.810916 kernel: acpiphp: Slot [12] registered
Feb 12 20:19:43.810923 kernel: acpiphp: Slot [13] registered
Feb 12 20:19:43.810929 kernel: acpiphp: Slot [14] registered
Feb 12 20:19:43.810938 kernel: acpiphp: Slot [15] registered
Feb 12 20:19:43.810944 kernel: acpiphp: Slot [16] registered
Feb 12 20:19:43.810951 kernel: acpiphp: Slot [17] registered
Feb 12 20:19:43.810958 kernel: acpiphp: Slot [18] registered
Feb 12 20:19:43.810964 kernel: acpiphp: Slot [19] registered
Feb 12 20:19:43.810971 kernel: acpiphp: Slot [20] registered
Feb 12 20:19:43.810978 kernel: acpiphp: Slot [21] registered
Feb 12 20:19:43.810993 kernel: acpiphp: Slot [22] registered
Feb 12 20:19:43.811000 kernel: acpiphp: Slot [23] registered
Feb 12 20:19:43.811008 kernel: acpiphp: Slot [24] registered
Feb 12 20:19:43.811015 kernel: acpiphp: Slot [25] registered
Feb 12 20:19:43.811021 kernel: acpiphp: Slot [26] registered
Feb 12 20:19:43.811028 kernel: acpiphp: Slot [27] registered
Feb 12 20:19:43.811035 kernel: acpiphp: Slot [28] registered
Feb 12 20:19:43.811041 kernel: acpiphp: Slot [29] registered
Feb 12 20:19:43.811048 kernel: acpiphp: Slot [30] registered
Feb 12 20:19:43.811055 kernel: acpiphp: Slot [31] registered
Feb 12 20:19:43.811062 kernel: PCI host bridge to bus 0000:00
Feb 12 20:19:43.811148 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 12 20:19:43.811216 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 12 20:19:43.811278 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 12 20:19:43.811338 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 12 20:19:43.811398 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 12 20:19:43.811467 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 12 20:19:43.811549 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 12 20:19:43.811634 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 12 20:19:43.811711 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 12 20:19:43.811780 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 12 20:19:43.811847 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 12 20:19:43.811912 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 12 20:19:43.811979 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 12 20:19:43.812059 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 12 20:19:43.812139 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 12 20:19:43.812207 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 12 20:19:43.812275 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 12 20:19:43.812352 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 12 20:19:43.812432 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 12 20:19:43.812509 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 12 20:19:43.812581 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 12 20:19:43.812648 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 12 20:19:43.812726 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 12 20:19:43.812795 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f]
Feb 12 20:19:43.812867 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 12 20:19:43.812935 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 12 20:19:43.813024 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 12 20:19:43.813098 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 12 20:19:43.813171 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 12 20:19:43.813244 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 12 20:19:43.813325 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 12 20:19:43.813393 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 12 20:19:43.813469 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 12 20:19:43.813540 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 12 20:19:43.813612 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 12 20:19:43.813621 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 12 20:19:43.813628 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 12 20:19:43.813635 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 12 20:19:43.813642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 12 20:19:43.813649 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 12 20:19:43.813656 kernel: iommu: Default domain type: Translated
Feb 12 20:19:43.813662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 12 20:19:43.813728 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 12 20:19:43.813799 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 12 20:19:43.813865 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 12 20:19:43.813874 kernel: vgaarb: loaded
Feb 12 20:19:43.813881 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:19:43.813888 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:19:43.813895 kernel: PTP clock support registered
Feb 12 20:19:43.813902 kernel: PCI: Using ACPI for IRQ routing
Feb 12 20:19:43.813909 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 12 20:19:43.813917 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 12 20:19:43.813924 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff]
Feb 12 20:19:43.813931 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 12 20:19:43.813938 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 12 20:19:43.813944 kernel: clocksource: Switched to clocksource kvm-clock
Feb 12 20:19:43.813951 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:19:43.813958 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:19:43.813965 kernel: pnp: PnP ACPI init
Feb 12 20:19:43.814049 kernel: pnp 00:02: [dma 2]
Feb 12 20:19:43.814063 kernel: pnp: PnP ACPI: found 6 devices
Feb 12 20:19:43.814070 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 12 20:19:43.814077 kernel: NET: Registered PF_INET protocol family
Feb 12 20:19:43.814084 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:19:43.814091 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:19:43.814098 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:19:43.814105 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:19:43.814112 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:19:43.814120 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:19:43.814127 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:19:43.814134 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:19:43.814140 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:19:43.814147 kernel: NET: Registered PF_XDP protocol family
Feb 12 20:19:43.814213 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 12 20:19:43.814274 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 12 20:19:43.814332 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 12 20:19:43.814391 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 12 20:19:43.814459 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 12 20:19:43.814537 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 12 20:19:43.814606 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 12 20:19:43.814674 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 12 20:19:43.814683 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:19:43.814690 kernel: Initialise system trusted keyrings
Feb 12 20:19:43.814697 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:19:43.814704 kernel: Key type asymmetric registered
Feb 12 20:19:43.814713 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:19:43.814720 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:19:43.814727 kernel: io scheduler mq-deadline registered
Feb 12 20:19:43.814734 kernel: io scheduler kyber registered
Feb 12 20:19:43.814741 kernel: io scheduler bfq registered
Feb 12 20:19:43.814747 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 12 20:19:43.814755 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 12 20:19:43.814761 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 12 20:19:43.814768 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 12 20:19:43.814776 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:19:43.814783 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 12 20:19:43.814790 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 12 20:19:43.814797 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 12 20:19:43.814804 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 12 20:19:43.814875 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 12 20:19:43.814885 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 12 20:19:43.814955 kernel: rtc_cmos 00:05: registered as rtc0
Feb 12 20:19:43.815032 kernel: rtc_cmos 00:05: setting system clock to 2024-02-12T20:19:43 UTC (1707769183)
Feb 12 20:19:43.815095 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 12 20:19:43.815103 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:19:43.815110 kernel: Segment Routing with IPv6
Feb 12 20:19:43.815117 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:19:43.815124 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:19:43.815131 kernel: Key type dns_resolver registered
Feb 12 20:19:43.815138 kernel: IPI shorthand broadcast: enabled
Feb 12 20:19:43.815145 kernel: sched_clock: Marking stable (384002170, 72370684)->(495016861, -38644007)
Feb 12 20:19:43.815154 kernel: registered taskstats version 1
Feb 12 20:19:43.815161 kernel: Loading compiled-in X.509 certificates
Feb 12 20:19:43.815168 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 253e5c5c936b12e2ff2626e7f3214deb753330c8'
Feb 12 20:19:43.815175 kernel: Key type .fscrypt registered
Feb 12 20:19:43.815182 kernel: Key type fscrypt-provisioning registered
Feb 12 20:19:43.815189 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:19:43.815195 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:19:43.815202 kernel: ima: No architecture policies found
Feb 12 20:19:43.815210 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 12 20:19:43.815217 kernel: Write protecting the kernel read-only data: 28672k
Feb 12 20:19:43.815224 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 12 20:19:43.815231 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 12 20:19:43.815238 kernel: Run /init as init process
Feb 12 20:19:43.815244 kernel: with arguments:
Feb 12 20:19:43.815251 kernel: /init
Feb 12 20:19:43.815258 kernel: with environment:
Feb 12 20:19:43.815275 kernel: HOME=/
Feb 12 20:19:43.815286 kernel: TERM=linux
Feb 12 20:19:43.815303 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:19:43.815316 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:19:43.815327 systemd[1]: Detected virtualization kvm.
Feb 12 20:19:43.815336 systemd[1]: Detected architecture x86-64.
Feb 12 20:19:43.815345 systemd[1]: Running in initrd.
Feb 12 20:19:43.815354 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:19:43.815363 systemd[1]: Hostname set to .
Feb 12 20:19:43.815376 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:19:43.815384 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:19:43.815391 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:19:43.815398 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:19:43.815406 systemd[1]: Reached target paths.target.
Feb 12 20:19:43.815413 systemd[1]: Reached target slices.target.
Feb 12 20:19:43.815420 systemd[1]: Reached target swap.target.
Feb 12 20:19:43.815428 systemd[1]: Reached target timers.target.
Feb 12 20:19:43.815437 systemd[1]: Listening on iscsid.socket.
Feb 12 20:19:43.815444 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:19:43.815460 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:19:43.815467 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:19:43.815475 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:19:43.815483 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:19:43.815493 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:19:43.815500 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:19:43.815509 systemd[1]: Reached target sockets.target.
Feb 12 20:19:43.815517 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:19:43.815524 systemd[1]: Finished network-cleanup.service.
Feb 12 20:19:43.815532 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:19:43.815539 systemd[1]: Starting systemd-journald.service...
Feb 12 20:19:43.815547 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:19:43.815555 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:19:43.815563 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:19:43.815570 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:19:43.815578 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:19:43.815586 kernel: audit: type=1130 audit(1707769183.808:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.815595 systemd-journald[198]: Journal started
Feb 12 20:19:43.815636 systemd-journald[198]: Runtime Journal (/run/log/journal/178d5f6c81314991824b145f5094d5f1) is 6.0M, max 48.5M, 42.5M free.
Feb 12 20:19:43.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.827005 systemd[1]: Started systemd-journald.service.
Feb 12 20:19:43.827965 systemd-modules-load[199]: Inserted module 'overlay'
Feb 12 20:19:43.865592 kernel: audit: type=1130 audit(1707769183.828:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.865616 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:19:43.865627 kernel: Bridge firewalling registered
Feb 12 20:19:43.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.830094 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:19:43.868274 kernel: audit: type=1130 audit(1707769183.865:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.868297 kernel: SCSI subsystem initialized
Feb 12 20:19:43.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.837728 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:19:43.871789 kernel: audit: type=1130 audit(1707769183.869:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.838009 systemd-resolved[200]: Positive Trust Anchors:
Feb 12 20:19:43.874550 kernel: audit: type=1130 audit(1707769183.872:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.838020 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:19:43.838046 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:19:43.840143 systemd-resolved[200]: Defaulting to hostname 'linux'.
Feb 12 20:19:43.849959 systemd-modules-load[199]: Inserted module 'br_netfilter'
Feb 12 20:19:43.867143 systemd[1]: Started systemd-resolved.service.
Feb 12 20:19:43.869428 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:19:43.872477 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:19:43.876087 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:19:43.892209 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:19:43.897841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:19:43.897862 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:19:43.897878 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:19:43.897887 kernel: audit: type=1130 audit(1707769183.894:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.895246 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:19:43.901446 systemd-modules-load[199]: Inserted module 'dm_multipath'
Feb 12 20:19:43.902868 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:19:43.913961 kernel: audit: type=1130 audit(1707769183.910:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.914561 dracut-cmdline[216]: dracut-dracut-053
Feb 12 20:19:43.914561 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=f2beb0668e3dab90bbcf0ace3803b7ee02142bfb86913ef12ef6d2ee81a411a4
Feb 12 20:19:43.913389 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:19:43.920142 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:19:43.923145 kernel: audit: type=1130 audit(1707769183.919:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:43.968009 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:19:43.978008 kernel: iscsi: registered transport (tcp)
Feb 12 20:19:44.006019 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:19:44.006062 kernel: QLogic iSCSI HBA Driver
Feb 12 20:19:44.034048 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:19:44.037548 kernel: audit: type=1130 audit(1707769184.033:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.034948 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:19:44.080015 kernel: raid6: avx2x4 gen() 30714 MB/s
Feb 12 20:19:44.097021 kernel: raid6: avx2x4 xor() 7590 MB/s
Feb 12 20:19:44.114011 kernel: raid6: avx2x2 gen() 31825 MB/s
Feb 12 20:19:44.131010 kernel: raid6: avx2x2 xor() 18926 MB/s
Feb 12 20:19:44.148010 kernel: raid6: avx2x1 gen() 26315 MB/s
Feb 12 20:19:44.165011 kernel: raid6: avx2x1 xor() 15199 MB/s
Feb 12 20:19:44.182011 kernel: raid6: sse2x4 gen() 14737 MB/s
Feb 12 20:19:44.199007 kernel: raid6: sse2x4 xor() 7280 MB/s
Feb 12 20:19:44.216010 kernel: raid6: sse2x2 gen() 16176 MB/s
Feb 12 20:19:44.233005 kernel: raid6: sse2x2 xor() 9780 MB/s
Feb 12 20:19:44.264016 kernel: raid6: sse2x1 gen() 12068 MB/s
Feb 12 20:19:44.281458 kernel: raid6: sse2x1 xor() 7749 MB/s
Feb 12 20:19:44.281477 kernel: raid6: using algorithm avx2x2 gen() 31825 MB/s
Feb 12 20:19:44.281486 kernel: raid6: .... xor() 18926 MB/s, rmw enabled
Feb 12 20:19:44.281495 kernel: raid6: using avx2x2 recovery algorithm
Feb 12 20:19:44.294016 kernel: xor: automatically using best checksumming function avx
Feb 12 20:19:44.386018 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 12 20:19:44.394432 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:19:44.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.394000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:19:44.395000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:19:44.396387 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:19:44.407867 systemd-udevd[400]: Using default interface naming scheme 'v252'.
Feb 12 20:19:44.411603 systemd[1]: Started systemd-udevd.service.
Feb 12 20:19:44.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.413870 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:19:44.422890 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Feb 12 20:19:44.445962 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:19:44.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.447251 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:19:44.481595 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:19:44.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:44.510090 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 12 20:19:44.518559 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:19:44.518588 kernel: GPT:9289727 != 19775487
Feb 12 20:19:44.518601 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:19:44.518611 kernel: GPT:9289727 != 19775487
Feb 12 20:19:44.519401 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:19:44.519425 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:19:44.526009 kernel: libata version 3.00 loaded.
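
The GPT warnings above (9289727 != 19775487, "Alternate GPT header not at the end of the disk") are the usual sign of a disk image that was grown after partitioning: the backup GPT header still sits at the old end of the disk. The disk-uuid step later in this log rewrites both headers, which appears to resolve it; done by hand, a minimal sketch of the conventional repair (assuming the /dev/vda device from the log, and sgdisk from gdisk instead of the parted the kernel suggests) would be:

    # Relocate the backup GPT header and partition entries to the real end of the disk.
    sgdisk --move-second-header /dev/vda   # short form: sgdisk -e /dev/vda
    # Re-check that the primary and backup headers now agree.
    sgdisk --verify /dev/vda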
Feb 12 20:19:44.529015 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 12 20:19:44.536947 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:19:44.536960 kernel: scsi host0: ata_piix
Feb 12 20:19:44.537087 kernel: scsi host1: ata_piix
Feb 12 20:19:44.537177 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14
Feb 12 20:19:44.537187 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15
Feb 12 20:19:44.554636 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:19:44.570596 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452)
Feb 12 20:19:44.569235 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 12 20:19:44.573328 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:19:44.579312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:19:44.587164 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:19:44.588396 systemd[1]: Starting disk-uuid.service...
Feb 12 20:19:44.628694 disk-uuid[471]: Primary Header is updated.
Feb 12 20:19:44.628694 disk-uuid[471]: Secondary Entries is updated.
Feb 12 20:19:44.628694 disk-uuid[471]: Secondary Header is updated.
Feb 12 20:19:44.631556 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:19:44.634010 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:19:44.696007 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 12 20:19:44.700001 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 12 20:19:44.707014 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 12 20:19:44.707065 kernel: AES CTR mode by8 optimization enabled
Feb 12 20:19:44.731021 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 12 20:19:44.731195 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 12 20:19:44.748019 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Feb 12 20:19:45.636456 disk-uuid[472]: The operation has completed successfully.
Feb 12 20:19:45.637311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 12 20:19:45.657853 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:19:45.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.657931 systemd[1]: Finished disk-uuid.service.
Feb 12 20:19:45.661998 systemd[1]: Starting verity-setup.service...
Feb 12 20:19:45.673017 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 12 20:19:45.689619 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:19:45.690186 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:19:45.691202 systemd[1]: Finished verity-setup.service.
Feb 12 20:19:45.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.745799 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:19:45.746811 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:19:45.746937 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:19:45.747792 systemd[1]: Starting ignition-setup.service...
Feb 12 20:19:45.749025 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:19:45.755335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 12 20:19:45.755362 kernel: BTRFS info (device vda6): using free space tree
Feb 12 20:19:45.755375 kernel: BTRFS info (device vda6): has skinny extents
Feb 12 20:19:45.762921 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:19:45.772279 systemd[1]: Finished ignition-setup.service.
Feb 12 20:19:45.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.773177 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:19:45.808506 ignition[627]: Ignition 2.14.0
Feb 12 20:19:45.808821 ignition[627]: Stage: fetch-offline
Feb 12 20:19:45.808861 ignition[627]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:19:45.808868 ignition[627]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:19:45.809374 ignition[627]: parsed url from cmdline: ""
Feb 12 20:19:45.809378 ignition[627]: no config URL provided
Feb 12 20:19:45.809383 ignition[627]: reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:19:45.809391 ignition[627]: no config at "/usr/lib/ignition/user.ign"
Feb 12 20:19:45.809417 ignition[627]: op(1): [started] loading QEMU firmware config module
Feb 12 20:19:45.809422 ignition[627]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 12 20:19:45.815694 ignition[627]: op(1): [finished] loading QEMU firmware config module
Feb 12 20:19:45.818461 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:19:45.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.819000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:19:45.820647 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:19:45.873333 ignition[627]: parsing config with SHA512: 940594678f776ceb7fad7252d66be80dc5a149f3cb29073d35d6582483467b8418a10bd6507bed79d269d856a795190a099b0d2ffb2c9c3362e33a33a5452582
Feb 12 20:19:45.893786 systemd-networkd[708]: lo: Link UP
Feb 12 20:19:45.893796 systemd-networkd[708]: lo: Gained carrier
Feb 12 20:19:45.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.894270 systemd-networkd[708]: Enumeration completed
Feb 12 20:19:45.894357 systemd[1]: Started systemd-networkd.service.
Feb 12 20:19:45.894497 systemd-networkd[708]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:19:45.895668 systemd[1]: Reached target network.target.
Feb 12 20:19:45.895900 systemd-networkd[708]: eth0: Link UP
Feb 12 20:19:45.895904 systemd-networkd[708]: eth0: Gained carrier
Feb 12 20:19:45.897828 systemd[1]: Starting iscsiuio.service...
Feb 12 20:19:45.902543 systemd[1]: Started iscsiuio.service.
Feb 12 20:19:45.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.904436 systemd[1]: Starting iscsid.service...
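
In the fetch-offline stage above, Ignition found no config URL on the kernel command line and no /usr/lib/ignition/user.ign, loaded the qemu_fw_cfg module, and (as the later "fetched user config from qemu" line shows) read the user config from the QEMU firmware config device. For reference, a hedged sketch of how such a config is typically handed to a Flatcar QEMU guest; opt/org.flatcar-linux/config is the fw_cfg name Flatcar documents, while the ignition.json file name and the elided machine options are assumptions:

    # Expose an Ignition config to the guest via the QEMU firmware config device.
    qemu-system-x86_64 \
        -fw_cfg name=opt/org.flatcar-linux/config,file=ignition.json \
        ...   # remaining disk/network/serial options unchanged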
Feb 12 20:19:45.908262 iscsid[713]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:19:45.908262 iscsid[713]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:19:45.908262 iscsid[713]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:19:45.908262 iscsid[713]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:19:45.908262 iscsid[713]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:19:45.908262 iscsid[713]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:19:45.915541 unknown[627]: fetched base config from "system"
Feb 12 20:19:45.915549 unknown[627]: fetched user config from "qemu"
Feb 12 20:19:45.916209 ignition[627]: fetch-offline: fetch-offline passed
Feb 12 20:19:45.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.917426 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:19:45.916290 ignition[627]: Ignition finished successfully
Feb 12 20:19:45.918162 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 12 20:19:45.918864 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:19:45.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.919957 systemd[1]: Started iscsid.service.
Feb 12 20:19:45.920069 systemd-networkd[708]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 12 20:19:45.923229 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:19:45.929479 ignition[715]: Ignition 2.14.0
Feb 12 20:19:45.929489 ignition[715]: Stage: kargs
Feb 12 20:19:45.929597 ignition[715]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:19:45.929609 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:19:45.931091 ignition[715]: kargs: kargs passed
Feb 12 20:19:45.931130 ignition[715]: Ignition finished successfully
Feb 12 20:19:45.932542 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:19:45.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.934151 systemd[1]: Starting ignition-disks.service...
Feb 12 20:19:45.940948 ignition[722]: Ignition 2.14.0
Feb 12 20:19:45.940958 ignition[722]: Stage: disks
Feb 12 20:19:45.941076 ignition[722]: no configs at "/usr/lib/ignition/base.d"
Feb 12 20:19:45.941086 ignition[722]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 12 20:19:45.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.942184 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:19:45.943448 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:19:45.944722 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:19:45.945746 systemd[1]: Reached target remote-fs.target.
Feb 12 20:19:45.947425 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:19:45.949433 ignition[722]: disks: disks passed
Feb 12 20:19:45.949475 ignition[722]: Ignition finished successfully
Feb 12 20:19:45.951194 systemd[1]: Finished ignition-disks.service.
Feb 12 20:19:45.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.951338 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:19:45.953460 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:19:45.954555 systemd[1]: Reached target local-fs.target.
Feb 12 20:19:45.955579 systemd[1]: Reached target sysinit.target.
Feb 12 20:19:45.956578 systemd[1]: Reached target basic.target.
Feb 12 20:19:45.957829 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:19:45.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.959628 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:19:45.984570 systemd-fsck[742]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 12 20:19:45.997918 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:19:45.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:45.999154 systemd[1]: Mounting sysroot.mount...
Feb 12 20:19:46.005006 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:19:46.005017 systemd[1]: Mounted sysroot.mount.
Feb 12 20:19:46.005175 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:19:46.007108 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:19:46.008382 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:19:46.008453 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:19:46.008479 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:19:46.014149 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:19:46.015865 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:19:46.020953 initrd-setup-root[752]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:19:46.024692 initrd-setup-root[760]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:19:46.028111 initrd-setup-root[768]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:19:46.031469 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:19:46.054338 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:19:46.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:19:46.056143 systemd[1]: Starting ignition-mount.service...
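
The iscsid warnings earlier in this log are benign on a host that never logs into iSCSI targets, but silencing them only takes the file the daemon asks for. A minimal sketch, following the IQN format from iscsid's own message (the initiator name below is a made-up example, not taken from this system):

    # /etc/iscsi/initiatorname.iscsi
    # Format: InitiatorName=iqn.<yyyy-mm>.<reversed domain>[:identifier]
    InitiatorName=iqn.2024-02.com.example:flatcar-node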
Feb 12 20:19:46.057644 systemd[1]: Starting sysroot-boot.service... Feb 12 20:19:46.060788 bash[793]: umount: /sysroot/usr/share/oem: not mounted. Feb 12 20:19:46.067942 ignition[794]: INFO : Ignition 2.14.0 Feb 12 20:19:46.068746 ignition[794]: INFO : Stage: mount Feb 12 20:19:46.068746 ignition[794]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:19:46.068746 ignition[794]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:19:46.070828 ignition[794]: INFO : mount: mount passed Feb 12 20:19:46.070828 ignition[794]: INFO : Ignition finished successfully Feb 12 20:19:46.071572 systemd[1]: Finished ignition-mount.service. Feb 12 20:19:46.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:46.074083 systemd[1]: Finished sysroot-boot.service. Feb 12 20:19:46.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:46.697280 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 12 20:19:46.703051 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (804) Feb 12 20:19:46.703074 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 12 20:19:46.703083 kernel: BTRFS info (device vda6): using free space tree Feb 12 20:19:46.704098 kernel: BTRFS info (device vda6): has skinny extents Feb 12 20:19:46.706832 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 12 20:19:46.707525 systemd[1]: Starting ignition-files.service... Feb 12 20:19:46.720181 ignition[824]: INFO : Ignition 2.14.0 Feb 12 20:19:46.720181 ignition[824]: INFO : Stage: files Feb 12 20:19:46.721468 ignition[824]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:19:46.721468 ignition[824]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:19:46.723203 ignition[824]: DEBUG : files: compiled without relabeling support, skipping Feb 12 20:19:46.723203 ignition[824]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 12 20:19:46.723203 ignition[824]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 12 20:19:46.726017 ignition[824]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 12 20:19:46.726017 ignition[824]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 12 20:19:46.726017 ignition[824]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 12 20:19:46.726017 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:19:46.726017 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 12 20:19:46.724485 unknown[824]: wrote ssh authorized keys file for user: core Feb 12 20:19:46.755468 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 12 20:19:46.824336 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 12 20:19:46.825896 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:19:46.825896 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 12 20:19:47.102224 systemd-networkd[708]: eth0: Gained IPv6LL Feb 12 20:19:47.179227 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 12 20:19:47.252142 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 12 20:19:47.255857 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 12 20:19:47.255857 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:19:47.255857 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 12 20:19:47.563963 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 12 20:19:47.744090 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 12 20:19:47.746434 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 12 20:19:47.746434 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:19:47.746434 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 12 20:19:47.746434 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:19:47.746434 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 12 20:19:47.809692 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 12 20:19:48.004425 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 12 20:19:48.004425 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 12 20:19:48.007627 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:19:48.007627 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 12 20:19:48.052560 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 12 20:19:48.394785 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 
a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 12 20:19:48.394785 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 12 20:19:48.398212 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:19:48.398212 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 12 20:19:48.442919 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 12 20:19:48.642978 ignition[824]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 12 20:19:48.645159 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 12 20:19:48.645159 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:19:48.645159 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 12 20:19:48.732219 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:19:48.804366 ignition[824]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 12 20:19:48.804366 ignition[824]: INFO : files: op(10): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(10): op(11): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 
12 20:19:48.822048 ignition[824]: INFO : files: op(10): op(11): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(10): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(12): [started] processing unit "prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(12): op(13): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(12): op(13): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(12): [finished] processing unit "prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:19:48.822048 ignition[824]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:19:48.862448 kernel: kauditd_printk_skb: 24 callbacks suppressed Feb 12 20:19:48.862474 kernel: audit: type=1130 audit(1707769188.824:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.862999 kernel: audit: type=1130 audit(1707769188.833:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.863013 kernel: audit: type=1131 audit(1707769188.833:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.863022 kernel: audit: type=1130 audit(1707769188.838:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:19:48.863032 kernel: audit: type=1130 audit(1707769188.855:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.863042 kernel: audit: type=1131 audit(1707769188.855:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:19:48.863195 ignition[824]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:19:48.863195 ignition[824]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:19:48.863195 ignition[824]: INFO : files: files passed Feb 12 20:19:48.863195 ignition[824]: INFO : Ignition finished successfully Feb 12 20:19:48.878383 kernel: audit: type=1130 audit(1707769188.873:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:19:48.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.823129 systemd[1]: Finished ignition-files.service. Feb 12 20:19:48.825019 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:19:48.879944 initrd-setup-root-after-ignition[847]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Feb 12 20:19:48.829023 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:19:48.882957 initrd-setup-root-after-ignition[850]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:19:48.829550 systemd[1]: Starting ignition-quench.service... Feb 12 20:19:48.831997 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:19:48.832102 systemd[1]: Finished ignition-quench.service. Feb 12 20:19:48.890226 kernel: audit: type=1131 audit(1707769188.887:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.833475 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:19:48.839834 systemd[1]: Reached target ignition-complete.target. Feb 12 20:19:48.844646 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:19:48.855044 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:19:48.855134 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:19:48.856073 systemd[1]: Reached target initrd-fs.target. Feb 12 20:19:48.862464 systemd[1]: Reached target initrd.target. Feb 12 20:19:48.863096 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 12 20:19:48.863887 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:19:48.872364 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:19:48.874213 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:19:48.882043 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:19:48.906381 kernel: audit: type=1131 audit(1707769188.903:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.883126 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:19:48.884757 systemd[1]: Stopped target timers.target. Feb 12 20:19:48.910570 kernel: audit: type=1131 audit(1707769188.906:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:19:48.885866 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:19:48.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.885948 systemd[1]: Stopped dracut-pre-pivot.service. Feb 12 20:19:48.887275 systemd[1]: Stopped target initrd.target. Feb 12 20:19:48.890355 systemd[1]: Stopped target basic.target. Feb 12 20:19:48.891596 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:19:48.892864 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:19:48.894086 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:19:48.895323 systemd[1]: Stopped target remote-fs.target. Feb 12 20:19:48.896429 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:19:48.897516 systemd[1]: Stopped target sysinit.target. Feb 12 20:19:48.898584 systemd[1]: Stopped target local-fs.target. Feb 12 20:19:48.899954 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:19:48.901094 systemd[1]: Stopped target swap.target. Feb 12 20:19:48.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.902096 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:19:48.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.902181 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:19:48.903257 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:19:48.924002 iscsid[713]: iscsid shutting down. Feb 12 20:19:48.906438 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:19:48.906540 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:19:48.907663 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:19:48.907761 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:19:48.910699 systemd[1]: Stopped target paths.target. Feb 12 20:19:48.911816 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:19:48.916057 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:19:48.917295 systemd[1]: Stopped target slices.target. Feb 12 20:19:48.931899 ignition[864]: INFO : Ignition 2.14.0 Feb 12 20:19:48.931899 ignition[864]: INFO : Stage: umount Feb 12 20:19:48.931899 ignition[864]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 12 20:19:48.931899 ignition[864]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 12 20:19:48.931899 ignition[864]: INFO : umount: umount passed Feb 12 20:19:48.931899 ignition[864]: INFO : Ignition finished successfully Feb 12 20:19:48.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:19:48.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.918282 systemd[1]: Stopped target sockets.target. Feb 12 20:19:48.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.919390 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:19:48.919499 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:19:48.920655 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:19:48.920737 systemd[1]: Stopped ignition-files.service. Feb 12 20:19:48.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.922406 systemd[1]: Stopping ignition-mount.service... Feb 12 20:19:48.925129 systemd[1]: Stopping iscsid.service... Feb 12 20:19:48.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.926491 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:19:48.928682 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:19:48.928822 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:19:48.930266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 12 20:19:48.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.930382 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:19:48.933211 systemd[1]: iscsid.service: Deactivated successfully. Feb 12 20:19:48.933290 systemd[1]: Stopped iscsid.service. Feb 12 20:19:48.934632 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:19:48.934693 systemd[1]: Stopped ignition-mount.service. Feb 12 20:19:48.936020 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:19:48.936085 systemd[1]: Closed iscsid.socket. 
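The Ignition files stage logged above fetches each artifact over HTTPS and, where a checksum is supplied, verifies it against an expected sha512 sum before writing it into /sysroot (the "file matches expected sum of:" records). A hedged sketch of the kind of Ignition v2 config fragment behind one such op; the digest shown is the one logged for the kubectl download (op(7)):

    {
      "ignition": { "version": "2.3.0" },
      "storage": {
        "files": [{
          "filesystem": "root",
          "path": "/opt/bin/kubectl",
          "mode": 493,
          "contents": {
            "source": "https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl",
            "verification": { "hash": "sha512-857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83" }
          }
        }]
      }
    }

Downloads without a verification block (for example the helm and cilium tarballs above) are written as fetched, which is why no "matches expected sum" record appears for them.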
Feb 12 20:19:48.936739 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:19:48.936767 systemd[1]: Stopped ignition-disks.service. Feb 12 20:19:48.937905 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:19:48.937932 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:19:48.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.938533 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:19:48.938570 systemd[1]: Stopped ignition-setup.service. Feb 12 20:19:48.939750 systemd[1]: Stopping iscsiuio.service... Feb 12 20:19:48.942412 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 12 20:19:48.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.942739 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 12 20:19:48.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.942803 systemd[1]: Stopped iscsiuio.service. Feb 12 20:19:48.943695 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:19:48.943761 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:19:48.945449 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:19:48.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.945506 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:19:48.946334 systemd[1]: Stopped target network.target. Feb 12 20:19:48.947352 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:19:48.947375 systemd[1]: Closed iscsiuio.socket. Feb 12 20:19:48.948519 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:19:48.948546 systemd[1]: Stopped initrd-setup-root.service. Feb 12 20:19:48.949710 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:19:48.950909 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:19:48.956020 systemd-networkd[708]: eth0: DHCPv6 lease lost Feb 12 20:19:48.973000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:19:48.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.956893 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:19:48.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.974000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:19:48.956962 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:19:48.959313 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:19:48.959346 systemd[1]: Closed systemd-networkd.socket. 
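The DHCPv4 lease logged earlier (10.0.0.33/16 via 10.0.0.1) and the DHCPv6 lease dropped here are both held by systemd-networkd, which is being shut down as the initrd network is torn down. A minimal .network unit that produces that behavior, as a sketch (file name and Match pattern are assumptions, not taken from this image):

    # /etc/systemd/network/20-dhcp.network
    [Match]
    Name=eth0

    [Network]
    DHCP=yes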
Feb 12 20:19:48.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.960495 systemd[1]: Stopping network-cleanup.service... Feb 12 20:19:48.961056 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:19:48.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.961094 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:19:48.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.962212 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:19:48.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.962244 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:19:48.963456 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:19:48.963492 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:19:48.964761 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:19:48.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:48.966904 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 12 20:19:48.967264 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:19:48.967338 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:19:48.973566 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:19:48.973755 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:19:48.974668 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:19:48.974760 systemd[1]: Stopped network-cleanup.service. Feb 12 20:19:48.975313 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:19:48.975351 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:19:48.976664 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 12 20:19:48.976690 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 12 20:19:48.977724 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 12 20:19:48.977755 systemd[1]: Stopped dracut-pre-udev.service. Feb 12 20:19:48.978916 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Feb 12 20:19:48.978944 systemd[1]: Stopped dracut-cmdline.service. Feb 12 20:19:48.979946 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 12 20:19:48.979974 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 12 20:19:48.981586 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 12 20:19:48.982358 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 12 20:19:48.982397 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 12 20:19:48.983571 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:19:48.983599 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:19:48.984190 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 12 20:19:48.984216 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 12 20:19:48.985599 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 12 20:19:48.987700 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 12 20:19:48.987761 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 12 20:19:48.988634 systemd[1]: Reached target initrd-switch-root.target. Feb 12 20:19:48.989160 systemd[1]: Starting initrd-switch-root.service... Feb 12 20:19:49.002646 systemd[1]: Switching root. Feb 12 20:19:49.021111 systemd-journald[198]: Journal stopped Feb 12 20:19:51.924462 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Feb 12 20:19:51.924535 kernel: SELinux: Class mctp_socket not defined in policy. Feb 12 20:19:51.924578 kernel: SELinux: Class anon_inode not defined in policy. Feb 12 20:19:51.924594 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 12 20:19:51.924607 kernel: SELinux: policy capability network_peer_controls=1 Feb 12 20:19:51.924624 kernel: SELinux: policy capability open_perms=1 Feb 12 20:19:51.924642 kernel: SELinux: policy capability extended_socket_class=1 Feb 12 20:19:51.924655 kernel: SELinux: policy capability always_check_network=0 Feb 12 20:19:51.924668 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 12 20:19:51.924682 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 12 20:19:51.924695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 12 20:19:51.924708 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 12 20:19:51.924723 systemd[1]: Successfully loaded SELinux policy in 35.132ms. Feb 12 20:19:51.924747 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.273ms. Feb 12 20:19:51.924766 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 12 20:19:51.924781 systemd[1]: Detected virtualization kvm. Feb 12 20:19:51.924796 systemd[1]: Detected architecture x86-64. Feb 12 20:19:51.924810 systemd[1]: Detected first boot. Feb 12 20:19:51.924828 systemd[1]: Initializing machine ID from VM UUID. Feb 12 20:19:51.924843 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 12 20:19:51.924857 systemd[1]: Populated /etc with preset unit settings. Feb 12 20:19:51.924875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
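The "Populated /etc with preset unit settings" record above is systemd applying preset files, the same mechanism the Ignition files stage used when it logged "setting preset to enabled/disabled" earlier. A preset file is a plain list of enable/disable directives; a sketch matching the units handled above (the file name is an assumption):

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-cni-plugins.service
    enable prepare-critools.service
    enable prepare-helm.service
    disable coreos-metadata.service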
Feb 12 20:19:51.924891 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:19:51.924907 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:19:51.924922 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 12 20:19:51.924935 systemd[1]: Stopped initrd-switch-root.service. Feb 12 20:19:51.924953 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 12 20:19:51.924966 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 12 20:19:51.924980 systemd[1]: Created slice system-addon\x2drun.slice. Feb 12 20:19:51.925011 systemd[1]: Created slice system-getty.slice. Feb 12 20:19:51.925025 systemd[1]: Created slice system-modprobe.slice. Feb 12 20:19:51.925039 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 12 20:19:51.925052 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 12 20:19:51.925066 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 12 20:19:51.925081 systemd[1]: Created slice user.slice. Feb 12 20:19:51.925095 systemd[1]: Started systemd-ask-password-console.path. Feb 12 20:19:51.925109 systemd[1]: Started systemd-ask-password-wall.path. Feb 12 20:19:51.925124 systemd[1]: Set up automount boot.automount. Feb 12 20:19:51.925141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 12 20:19:51.925158 systemd[1]: Stopped target initrd-switch-root.target. Feb 12 20:19:51.925173 systemd[1]: Stopped target initrd-fs.target. Feb 12 20:19:51.925187 systemd[1]: Stopped target initrd-root-fs.target. Feb 12 20:19:51.925201 systemd[1]: Reached target integritysetup.target. Feb 12 20:19:51.925216 systemd[1]: Reached target remote-cryptsetup.target. Feb 12 20:19:51.925230 systemd[1]: Reached target remote-fs.target. Feb 12 20:19:51.925254 systemd[1]: Reached target slices.target. Feb 12 20:19:51.925272 systemd[1]: Reached target swap.target. Feb 12 20:19:51.925287 systemd[1]: Reached target torcx.target. Feb 12 20:19:51.925305 systemd[1]: Reached target veritysetup.target. Feb 12 20:19:51.925319 systemd[1]: Listening on systemd-coredump.socket. Feb 12 20:19:51.925332 systemd[1]: Listening on systemd-initctl.socket. Feb 12 20:19:51.925347 systemd[1]: Listening on systemd-networkd.socket. Feb 12 20:19:51.925361 systemd[1]: Listening on systemd-udevd-control.socket. Feb 12 20:19:51.925375 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 12 20:19:51.925388 systemd[1]: Listening on systemd-userdbd.socket. Feb 12 20:19:51.925402 systemd[1]: Mounting dev-hugepages.mount... Feb 12 20:19:51.925418 systemd[1]: Mounting dev-mqueue.mount... Feb 12 20:19:51.925456 systemd[1]: Mounting media.mount... Feb 12 20:19:51.925471 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:19:51.925485 systemd[1]: Mounting sys-kernel-debug.mount... Feb 12 20:19:51.925500 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 12 20:19:51.925513 systemd[1]: Mounting tmp.mount... Feb 12 20:19:51.925527 systemd[1]: Starting flatcar-tmpfiles.service... Feb 12 20:19:51.925540 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 12 20:19:51.925554 systemd[1]: Starting kmod-static-nodes.service... 
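The two locksmithd.service warnings above are systemd 252 deprecation notices: CPUShares= and MemoryLimit= are legacy cgroup-v1 directives, superseded by CPUWeight= and MemoryMax= in the unified hierarchy. A drop-in override silences them without editing the vendor unit; this is a sketch, and the concrete values are placeholders rather than locksmithd's real settings:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    # replaces CPUShares= (vendor unit line 8)
    CPUWeight=100
    # replaces MemoryLimit= (vendor unit line 9)
    MemoryMax=128M

The companion docker.socket warning is resolved the same way: point ListenStream= at /run/docker.sock rather than the legacy /var/run/ path, exactly as the message suggests.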
Feb 12 20:19:51.925570 systemd[1]: Starting modprobe@configfs.service... Feb 12 20:19:51.925601 systemd[1]: Starting modprobe@dm_mod.service... Feb 12 20:19:51.925615 systemd[1]: Starting modprobe@drm.service... Feb 12 20:19:51.925629 systemd[1]: Starting modprobe@efi_pstore.service... Feb 12 20:19:51.925643 systemd[1]: Starting modprobe@fuse.service... Feb 12 20:19:51.925657 systemd[1]: Starting modprobe@loop.service... Feb 12 20:19:51.925671 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 12 20:19:51.925685 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 12 20:19:51.925700 systemd[1]: Stopped systemd-fsck-root.service. Feb 12 20:19:51.925720 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 12 20:19:51.925734 systemd[1]: Stopped systemd-fsck-usr.service. Feb 12 20:19:51.925748 systemd[1]: Stopped systemd-journald.service. Feb 12 20:19:51.925763 kernel: fuse: init (API version 7.34) Feb 12 20:19:51.925776 systemd[1]: Starting systemd-journald.service... Feb 12 20:19:51.925790 kernel: loop: module loaded Feb 12 20:19:51.925803 systemd[1]: Starting systemd-modules-load.service... Feb 12 20:19:51.925819 systemd[1]: Starting systemd-network-generator.service... Feb 12 20:19:51.925833 systemd[1]: Starting systemd-remount-fs.service... Feb 12 20:19:51.925849 systemd[1]: Starting systemd-udev-trigger.service... Feb 12 20:19:51.925863 systemd[1]: verity-setup.service: Deactivated successfully. Feb 12 20:19:51.925877 systemd[1]: Stopped verity-setup.service. Feb 12 20:19:51.925891 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 12 20:19:51.925906 systemd[1]: Mounted dev-hugepages.mount. Feb 12 20:19:51.925920 systemd[1]: Mounted dev-mqueue.mount. Feb 12 20:19:51.925934 systemd[1]: Mounted media.mount. Feb 12 20:19:51.925948 systemd[1]: Mounted sys-kernel-debug.mount. Feb 12 20:19:51.925961 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 12 20:19:51.925977 systemd[1]: Mounted tmp.mount. Feb 12 20:19:51.926005 systemd[1]: Finished kmod-static-nodes.service. Feb 12 20:19:51.926019 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 12 20:19:51.926033 systemd[1]: Finished modprobe@configfs.service. Feb 12 20:19:51.926049 systemd-journald[975]: Journal started Feb 12 20:19:51.926097 systemd-journald[975]: Runtime Journal (/run/log/journal/178d5f6c81314991824b145f5094d5f1) is 6.0M, max 48.5M, 42.5M free. 
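The size line journald prints on startup ("Runtime Journal ... is 6.0M, max 48.5M") reflects the caps it computed for /run; both the runtime and persistent journal caps are tunable. A sketch of the relevant knobs, with values that simply mirror the caps printed in this log:

    # /etc/systemd/journald.conf.d/size.conf
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=195M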
Feb 12 20:19:49.074000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 12 20:19:49.646000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:19:49.646000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 12 20:19:49.646000 audit: BPF prog-id=10 op=LOAD Feb 12 20:19:49.646000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:19:49.646000 audit: BPF prog-id=11 op=LOAD Feb 12 20:19:49.646000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:19:49.679000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 12 20:19:49.679000 audit[898]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001878dc a1=c00002ae40 a2=c000029b00 a3=32 items=0 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:19:49.679000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:19:49.680000 audit[898]: AVC avc: denied { associate } for pid=898 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 12 20:19:49.680000 audit[898]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001879b5 a2=1ed a3=0 items=2 ppid=881 pid=898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:19:49.680000 audit: CWD cwd="/" Feb 12 20:19:49.680000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:49.680000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:49.680000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 12 20:19:51.812000 audit: BPF prog-id=12 op=LOAD Feb 12 20:19:51.812000 audit: BPF prog-id=3 op=UNLOAD Feb 12 20:19:51.812000 audit: BPF prog-id=13 op=LOAD Feb 12 20:19:51.812000 audit: BPF prog-id=14 op=LOAD Feb 12 20:19:51.812000 audit: BPF prog-id=4 op=UNLOAD Feb 12 20:19:51.812000 audit: BPF prog-id=5 op=UNLOAD Feb 12 20:19:51.813000 audit: BPF prog-id=15 op=LOAD Feb 12 20:19:51.813000 audit: BPF prog-id=12 op=UNLOAD Feb 12 
20:19:51.813000 audit: BPF prog-id=16 op=LOAD Feb 12 20:19:51.813000 audit: BPF prog-id=17 op=LOAD Feb 12 20:19:51.813000 audit: BPF prog-id=13 op=UNLOAD Feb 12 20:19:51.813000 audit: BPF prog-id=14 op=UNLOAD Feb 12 20:19:51.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.823000 audit: BPF prog-id=15 op=UNLOAD Feb 12 20:19:51.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.898000 audit: BPF prog-id=18 op=LOAD Feb 12 20:19:51.898000 audit: BPF prog-id=19 op=LOAD Feb 12 20:19:51.898000 audit: BPF prog-id=20 op=LOAD Feb 12 20:19:51.898000 audit: BPF prog-id=16 op=UNLOAD Feb 12 20:19:51.898000 audit: BPF prog-id=17 op=UNLOAD Feb 12 20:19:51.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.923000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 12 20:19:51.923000 audit[975]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd7afa6830 a2=4000 a3=7ffd7afa68cc items=0 ppid=1 pid=975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:19:51.923000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 12 20:19:51.927420 systemd[1]: Started systemd-journald.service. Feb 12 20:19:51.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:19:51.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:49.679057 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:19:51.811493 systemd[1]: Queued start job for default target multi-user.target. Feb 12 20:19:49.679239 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:19:51.811505 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 12 20:19:49.679259 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:19:51.814536 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 12 20:19:49.679284 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 12 20:19:51.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:49.679292 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 12 20:19:49.679337 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 12 20:19:49.679349 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 12 20:19:49.679549 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 12 20:19:51.928452 systemd[1]: Finished flatcar-tmpfiles.service. 
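The torcx-generator records interleaved above walk the profile search path (/usr/share/torcx/profiles, then the OEM and /var/lib/torcx locations), find docker-1.12-no and vendor, and select vendor. A torcx profile is a small JSON manifest pairing image names with references; a sketch of what vendor.json plausibly contains, given that the docker image is unpacked with reference com.coreos.cl below (the exact file contents are an assumption):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }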
Feb 12 20:19:49.679588 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 12 20:19:49.679600 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 12 20:19:49.679963 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 12 20:19:49.680016 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 12 20:19:49.680034 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 12 20:19:49.680047 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 12 20:19:51.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:49.680066 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 12 20:19:49.680077 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:49Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 12 20:19:51.541973 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:19:51.542219 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:19:51.542330 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:19:51.929477 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
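The cache and "store skipped" records above also show the store layout torcx expects: archives named <name>:<reference>.torcx.tgz, looked up across the store_paths list logged earlier (only the first path exists in this image):

    /usr/share/torcx/store/
        docker:20.10.torcx.tgz          # name "docker", reference "20.10"
        docker:com.coreos.cl.torcx.tgz  # name "docker", reference "com.coreos.cl"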
Feb 12 20:19:51.542500 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 12 20:19:51.542544 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 12 20:19:51.929634 systemd[1]: Finished modprobe@dm_mod.service. Feb 12 20:19:51.542603 /usr/lib/systemd/system-generators/torcx-generator[898]: time="2024-02-12T20:19:51Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 12 20:19:51.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.930506 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 12 20:19:51.930710 systemd[1]: Finished modprobe@drm.service. Feb 12 20:19:51.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.931484 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 12 20:19:51.931646 systemd[1]: Finished modprobe@efi_pstore.service. Feb 12 20:19:51.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.932614 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 12 20:19:51.932840 systemd[1]: Finished modprobe@fuse.service. Feb 12 20:19:51.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.933622 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Feb 12 20:19:51.933798 systemd[1]: Finished modprobe@loop.service. Feb 12 20:19:51.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.934713 systemd[1]: Finished systemd-modules-load.service. Feb 12 20:19:51.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.935589 systemd[1]: Finished systemd-network-generator.service. Feb 12 20:19:51.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.936506 systemd[1]: Finished systemd-remount-fs.service. Feb 12 20:19:51.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.937532 systemd[1]: Reached target network-pre.target. Feb 12 20:19:51.939223 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 12 20:19:51.940701 systemd[1]: Mounting sys-kernel-config.mount... Feb 12 20:19:51.941256 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 12 20:19:51.942579 systemd[1]: Starting systemd-hwdb-update.service... Feb 12 20:19:51.944081 systemd[1]: Starting systemd-journal-flush.service... Feb 12 20:19:51.944722 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 12 20:19:51.946191 systemd[1]: Starting systemd-random-seed.service... Feb 12 20:19:51.946910 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 12 20:19:51.949341 systemd-journald[975]: Time spent on flushing to /var/log/journal/178d5f6c81314991824b145f5094d5f1 is 16.262ms for 1132 entries. Feb 12 20:19:51.949341 systemd-journald[975]: System Journal (/var/log/journal/178d5f6c81314991824b145f5094d5f1) is 8.0M, max 195.6M, 187.6M free. Feb 12 20:19:51.975220 systemd-journald[975]: Received client request to flush runtime journal. Feb 12 20:19:51.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.950084 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:19:51.952430 systemd[1]: Starting systemd-sysusers.service... 
Feb 12 20:19:51.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.959322 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 12 20:19:51.977295 udevadm[1001]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 12 20:19:51.960118 systemd[1]: Mounted sys-kernel-config.mount. Feb 12 20:19:51.962420 systemd[1]: Finished systemd-udev-trigger.service. Feb 12 20:19:51.964300 systemd[1]: Starting systemd-udev-settle.service... Feb 12 20:19:51.965169 systemd[1]: Finished systemd-random-seed.service. Feb 12 20:19:51.965887 systemd[1]: Reached target first-boot-complete.target. Feb 12 20:19:51.976142 systemd[1]: Finished systemd-journal-flush.service. Feb 12 20:19:51.979797 systemd[1]: Finished systemd-sysusers.service. Feb 12 20:19:51.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.980630 systemd[1]: Finished systemd-sysctl.service. Feb 12 20:19:51.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:51.982386 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 12 20:19:51.995869 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 12 20:19:51.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.486047 systemd[1]: Finished systemd-hwdb-update.service. Feb 12 20:19:52.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.486000 audit: BPF prog-id=21 op=LOAD Feb 12 20:19:52.486000 audit: BPF prog-id=22 op=LOAD Feb 12 20:19:52.486000 audit: BPF prog-id=7 op=UNLOAD Feb 12 20:19:52.486000 audit: BPF prog-id=8 op=UNLOAD Feb 12 20:19:52.488063 systemd[1]: Starting systemd-udevd.service... Feb 12 20:19:52.503277 systemd-udevd[1007]: Using default interface naming scheme 'v252'. Feb 12 20:19:52.515351 systemd[1]: Started systemd-udevd.service. Feb 12 20:19:52.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.516000 audit: BPF prog-id=23 op=LOAD Feb 12 20:19:52.518160 systemd[1]: Starting systemd-networkd.service... Feb 12 20:19:52.523000 audit: BPF prog-id=24 op=LOAD Feb 12 20:19:52.523000 audit: BPF prog-id=25 op=LOAD Feb 12 20:19:52.523000 audit: BPF prog-id=26 op=LOAD Feb 12 20:19:52.525188 systemd[1]: Starting systemd-userdbd.service... Feb 12 20:19:52.551029 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 12 20:19:52.551871 systemd[1]: Started systemd-userdbd.service. 
Feb 12 20:19:52.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.563372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 12 20:19:52.594015 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 12 20:19:52.597569 systemd-networkd[1017]: lo: Link UP Feb 12 20:19:52.597582 systemd-networkd[1017]: lo: Gained carrier Feb 12 20:19:52.597948 systemd-networkd[1017]: Enumeration completed Feb 12 20:19:52.598052 systemd[1]: Started systemd-networkd.service. Feb 12 20:19:52.598072 systemd-networkd[1017]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 12 20:19:52.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.599566 systemd-networkd[1017]: eth0: Link UP Feb 12 20:19:52.599572 systemd-networkd[1017]: eth0: Gained carrier Feb 12 20:19:52.601000 audit[1012]: AVC avc: denied { confidentiality } for pid=1012 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 12 20:19:52.611005 kernel: ACPI: button: Power Button [PWRF] Feb 12 20:19:52.601000 audit[1012]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5580f001d730 a1=32194 a2=7f9dee48cbc5 a3=5 items=108 ppid=1007 pid=1012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:19:52.601000 audit: CWD cwd="/" Feb 12 20:19:52.601000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=1 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=2 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=3 name=(null) inode=14566 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=4 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=5 name=(null) inode=14567 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=6 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=7 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=8 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=9 name=(null) inode=14569 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=10 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=11 name=(null) inode=14570 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=12 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=13 name=(null) inode=14571 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=14 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=15 name=(null) inode=14572 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=16 name=(null) inode=14568 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=17 name=(null) inode=14573 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=18 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=19 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=20 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=21 name=(null) inode=14575 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=22 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=23 name=(null) inode=14576 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 
audit: PATH item=24 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=25 name=(null) inode=14577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=26 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=27 name=(null) inode=14578 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=28 name=(null) inode=14574 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=29 name=(null) inode=14579 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=30 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=31 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=32 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=33 name=(null) inode=14581 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=34 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=35 name=(null) inode=14582 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=36 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=37 name=(null) inode=14583 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=38 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=39 name=(null) inode=14584 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=40 name=(null) inode=14580 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=41 name=(null) inode=14585 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=42 name=(null) inode=14565 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=43 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=44 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=45 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=46 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=47 name=(null) inode=14588 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=48 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=49 name=(null) inode=14589 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=50 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=51 name=(null) inode=14590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=52 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=53 name=(null) inode=14591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=55 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=56 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=57 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=58 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=59 name=(null) inode=14594 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=60 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=61 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=62 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=63 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=64 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=65 name=(null) inode=14597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=66 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=67 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=68 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=69 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=70 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=71 name=(null) inode=14600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=72 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: 
PATH item=73 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=74 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=75 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=76 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=77 name=(null) inode=14603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=78 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=79 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=80 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=81 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=82 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=83 name=(null) inode=14606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=84 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=85 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=86 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=87 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=88 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=89 name=(null) inode=14609 dev=00:0b mode=0100440 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=90 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=91 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=92 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=93 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=94 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=95 name=(null) inode=14612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=96 name=(null) inode=14592 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=97 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=98 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=99 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=100 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=101 name=(null) inode=14615 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=102 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=103 name=(null) inode=14616 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=104 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=105 name=(null) inode=14617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=106 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PATH item=107 name=(null) inode=14618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 12 20:19:52.601000 audit: PROCTITLE proctitle="(udev-worker)" Feb 12 20:19:52.621200 systemd-networkd[1017]: eth0: DHCPv4 address 10.0.0.33/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 12 20:19:52.662882 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 12 20:19:52.663005 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 12 20:19:52.670024 kernel: mousedev: PS/2 mouse device common for all mice Feb 12 20:19:52.712324 kernel: kvm: Nested Virtualization enabled Feb 12 20:19:52.712427 kernel: SVM: kvm: Nested Paging enabled Feb 12 20:19:52.712442 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 12 20:19:52.712455 kernel: SVM: Virtual GIF supported Feb 12 20:19:52.730016 kernel: EDAC MC: Ver: 3.0.0 Feb 12 20:19:52.750419 systemd[1]: Finished systemd-udev-settle.service. Feb 12 20:19:52.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.752376 systemd[1]: Starting lvm2-activation-early.service... Feb 12 20:19:52.760332 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:19:52.786847 systemd[1]: Finished lvm2-activation-early.service. Feb 12 20:19:52.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.787668 systemd[1]: Reached target cryptsetup.target. Feb 12 20:19:52.789279 systemd[1]: Starting lvm2-activation.service... Feb 12 20:19:52.792504 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 12 20:19:52.818697 systemd[1]: Finished lvm2-activation.service. Feb 12 20:19:52.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.819447 systemd[1]: Reached target local-fs-pre.target. Feb 12 20:19:52.820069 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 12 20:19:52.820093 systemd[1]: Reached target local-fs.target. Feb 12 20:19:52.820669 systemd[1]: Reached target machines.target. Feb 12 20:19:52.822135 systemd[1]: Starting ldconfig.service... Feb 12 20:19:52.822857 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 12 20:19:52.822891 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:19:52.823659 systemd[1]: Starting systemd-boot-update.service... 
Feb 12 20:19:52.825183 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 12 20:19:52.827174 systemd[1]: Starting systemd-machine-id-commit.service... Feb 12 20:19:52.828771 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:19:52.828810 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 12 20:19:52.829765 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 12 20:19:52.830838 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Feb 12 20:19:52.831935 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 12 20:19:52.833498 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 12 20:19:52.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.843894 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 12 20:19:52.844761 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 12 20:19:52.846058 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 12 20:19:52.862015 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31) Feb 12 20:19:52.862015 systemd-fsck[1054]: /dev/vda1: 789 files, 115339/258078 clusters Feb 12 20:19:52.862338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 12 20:19:52.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:52.865121 systemd[1]: Mounting boot.mount... Feb 12 20:19:53.083417 systemd[1]: Mounted boot.mount. Feb 12 20:19:53.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.094615 systemd[1]: Finished systemd-boot-update.service. Feb 12 20:19:53.362207 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 12 20:19:53.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.367728 systemd[1]: Starting audit-rules.service... Feb 12 20:19:53.369370 systemd[1]: Starting clean-ca-certificates.service... Feb 12 20:19:53.370845 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 12 20:19:53.371000 audit: BPF prog-id=27 op=LOAD Feb 12 20:19:53.372899 systemd[1]: Starting systemd-resolved.service... Feb 12 20:19:53.373000 audit: BPF prog-id=28 op=LOAD Feb 12 20:19:53.375643 systemd[1]: Starting systemd-timesyncd.service... Feb 12 20:19:53.379357 systemd[1]: Starting systemd-update-utmp.service... Feb 12 20:19:53.381829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Feb 12 20:19:53.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.382481 systemd[1]: Finished systemd-machine-id-commit.service. Feb 12 20:19:53.383000 audit[1068]: SYSTEM_BOOT pid=1068 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.383430 systemd[1]: Finished clean-ca-certificates.service. Feb 12 20:19:53.385856 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 12 20:19:53.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.387053 systemd[1]: Finished systemd-update-utmp.service. Feb 12 20:19:53.402635 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 12 20:19:53.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:19:53.412911 augenrules[1077]: No rules Feb 12 20:19:53.411000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 12 20:19:53.411000 audit[1077]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc08928a40 a2=420 a3=0 items=0 ppid=1057 pid=1077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:19:53.411000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 12 20:19:53.413629 systemd[1]: Finished audit-rules.service. Feb 12 20:19:53.431289 systemd[1]: Started systemd-timesyncd.service. Feb 12 20:19:53.446134 systemd[1]: Reached target time-set.target. Feb 12 20:19:53.452567 systemd-timesyncd[1067]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 12 20:19:53.452651 systemd-timesyncd[1067]: Initial clock synchronization to Mon 2024-02-12 20:19:53.781773 UTC. Feb 12 20:19:53.468048 systemd-resolved[1061]: Positive Trust Anchors: Feb 12 20:19:53.468063 systemd-resolved[1061]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 12 20:19:53.468098 systemd-resolved[1061]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 12 20:19:53.469464 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:19:53.474096 systemd[1]: Finished ldconfig.service. Feb 12 20:19:53.475884 systemd[1]: Starting systemd-update-done.service... Feb 12 20:19:53.478769 systemd-resolved[1061]: Defaulting to hostname 'linux'. Feb 12 20:19:53.480284 systemd[1]: Started systemd-resolved.service. Feb 12 20:19:53.480946 systemd[1]: Reached target network.target. Feb 12 20:19:53.481524 systemd[1]: Reached target nss-lookup.target. Feb 12 20:19:53.483422 systemd[1]: Finished systemd-update-done.service. Feb 12 20:19:53.484084 systemd[1]: Reached target sysinit.target. Feb 12 20:19:53.484711 systemd[1]: Started motdgen.path. Feb 12 20:19:53.485236 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:19:53.486097 systemd[1]: Started logrotate.timer. Feb 12 20:19:53.486673 systemd[1]: Started mdadm.timer. Feb 12 20:19:53.487133 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:19:53.487733 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:19:53.487754 systemd[1]: Reached target paths.target. Feb 12 20:19:53.488272 systemd[1]: Reached target timers.target. Feb 12 20:19:53.489020 systemd[1]: Listening on dbus.socket. Feb 12 20:19:53.490331 systemd[1]: Starting docker.socket... Feb 12 20:19:53.492352 systemd[1]: Listening on sshd.socket. Feb 12 20:19:53.492931 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:19:53.493238 systemd[1]: Listening on docker.socket. Feb 12 20:19:53.493794 systemd[1]: Reached target sockets.target. Feb 12 20:19:53.494336 systemd[1]: Reached target basic.target. Feb 12 20:19:53.494863 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:19:53.494880 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 12 20:19:53.495549 systemd[1]: Starting containerd.service... Feb 12 20:19:53.496789 systemd[1]: Starting dbus.service... Feb 12 20:19:53.497895 systemd[1]: Starting enable-oem-cloudinit.service... Feb 12 20:19:53.499390 systemd[1]: Starting extend-filesystems.service... Feb 12 20:19:53.500035 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 12 20:19:53.500997 systemd[1]: Starting motdgen.service... Feb 12 20:19:53.502075 jq[1088]: false Feb 12 20:19:53.525220 systemd[1]: Starting prepare-cni-plugins.service... Feb 12 20:19:53.527357 systemd[1]: Starting prepare-critools.service... Feb 12 20:19:53.528776 systemd[1]: Starting prepare-helm.service... 
Feb 12 20:19:53.530214 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 12 20:19:53.531571 systemd[1]: Starting sshd-keygen.service... Feb 12 20:19:53.533898 systemd[1]: Starting systemd-logind.service... Feb 12 20:19:53.534641 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:19:53.538403 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 12 20:19:53.538852 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 12 20:19:53.539117 dbus-daemon[1087]: [system] SELinux support is enabled Feb 12 20:19:53.539559 systemd[1]: Starting update-engine.service... Feb 12 20:19:53.546005 extend-filesystems[1089]: Found sr0 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda1 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda2 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda3 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found usr Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda4 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda6 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda7 Feb 12 20:19:53.546005 extend-filesystems[1089]: Found vda9 Feb 12 20:19:53.546005 extend-filesystems[1089]: Checking size of /dev/vda9 Feb 12 20:19:53.577962 extend-filesystems[1089]: Resized partition /dev/vda9 Feb 12 20:19:53.583309 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 12 20:19:53.547520 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 12 20:19:53.583414 extend-filesystems[1122]: resize2fs 1.46.5 (30-Dec-2021) Feb 12 20:19:53.550504 systemd[1]: Started dbus.service. Feb 12 20:19:53.584256 jq[1110]: true Feb 12 20:19:53.559373 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 12 20:19:53.559507 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 12 20:19:53.584707 tar[1116]: crictl Feb 12 20:19:53.559715 systemd[1]: motdgen.service: Deactivated successfully. Feb 12 20:19:53.589191 tar[1118]: linux-amd64/helm Feb 12 20:19:53.559829 systemd[1]: Finished motdgen.service. Feb 12 20:19:53.589506 jq[1121]: true Feb 12 20:19:53.563157 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 12 20:19:53.563298 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 12 20:19:53.589893 tar[1115]: ./ Feb 12 20:19:53.589893 tar[1115]: ./loopback Feb 12 20:19:53.566954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 12 20:19:53.566971 systemd[1]: Reached target system-config.target. Feb 12 20:19:53.568654 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 12 20:19:53.568667 systemd[1]: Reached target user-config.target. Feb 12 20:19:53.618058 update_engine[1107]: I0212 20:19:53.617782 1107 main.cc:92] Flatcar Update Engine starting Feb 12 20:19:53.625824 systemd[1]: Started update-engine.service. 
Feb 12 20:19:53.626335 update_engine[1107]: I0212 20:19:53.625831 1107 update_check_scheduler.cc:74] Next update check in 6m44s Feb 12 20:19:53.632454 tar[1115]: ./bandwidth Feb 12 20:19:53.635997 systemd[1]: Started locksmithd.service. Feb 12 20:19:53.638013 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 12 20:19:53.657150 extend-filesystems[1122]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 12 20:19:53.657150 extend-filesystems[1122]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 12 20:19:53.657150 extend-filesystems[1122]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 12 20:19:53.671225 extend-filesystems[1089]: Resized filesystem in /dev/vda9 Feb 12 20:19:53.672060 bash[1140]: Updated "/home/core/.ssh/authorized_keys" Feb 12 20:19:53.659263 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 12 20:19:53.659400 systemd[1]: Finished extend-filesystems.service. Feb 12 20:19:53.669614 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 12 20:19:53.672314 systemd-logind[1104]: Watching system buttons on /dev/input/event1 (Power Button) Feb 12 20:19:53.672328 systemd-logind[1104]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 12 20:19:53.675865 systemd-logind[1104]: New seat seat0. Feb 12 20:19:53.677307 env[1123]: time="2024-02-12T20:19:53.677248058Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 12 20:19:53.680690 systemd[1]: Started systemd-logind.service. Feb 12 20:19:53.716335 tar[1115]: ./ptp Feb 12 20:19:53.754211 tar[1115]: ./vlan Feb 12 20:19:53.765822 systemd[1]: Created slice system-sshd.slice. Feb 12 20:19:53.779581 env[1123]: time="2024-02-12T20:19:53.779536943Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 12 20:19:53.780039 env[1123]: time="2024-02-12T20:19:53.780023355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789098558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789149543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789425872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789447031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789461388Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789473000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789555114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789790405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789952840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 12 20:19:53.790199 env[1123]: time="2024-02-12T20:19:53.789973308Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 12 20:19:53.790586 env[1123]: time="2024-02-12T20:19:53.790042598Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 12 20:19:53.790586 env[1123]: time="2024-02-12T20:19:53.790058057Z" level=info msg="metadata content store policy set" policy=shared Feb 12 20:19:53.791588 tar[1115]: ./host-device Feb 12 20:19:53.828716 tar[1115]: ./tuning Feb 12 20:19:53.860743 tar[1115]: ./vrf Feb 12 20:19:53.913151 tar[1115]: ./sbr Feb 12 20:19:53.954539 systemd-networkd[1017]: eth0: Gained IPv6LL Feb 12 20:19:53.963860 tar[1115]: ./tap Feb 12 20:19:53.966413 env[1123]: time="2024-02-12T20:19:53.966360230Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 12 20:19:53.966461 env[1123]: time="2024-02-12T20:19:53.966419020Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 12 20:19:53.966461 env[1123]: time="2024-02-12T20:19:53.966431794Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 12 20:19:53.966511 env[1123]: time="2024-02-12T20:19:53.966480295Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966511 env[1123]: time="2024-02-12T20:19:53.966495994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966511 env[1123]: time="2024-02-12T20:19:53.966507907Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966568 env[1123]: time="2024-02-12T20:19:53.966518837Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966568 env[1123]: time="2024-02-12T20:19:53.966533795Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966568 env[1123]: time="2024-02-12T20:19:53.966544986Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966568 env[1123]: time="2024-02-12T20:19:53.966557400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.966640 env[1123]: time="2024-02-12T20:19:53.966569272Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 12 20:19:53.966640 env[1123]: time="2024-02-12T20:19:53.966582086Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 12 20:19:53.966751 env[1123]: time="2024-02-12T20:19:53.966727719Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 12 20:19:53.966853 env[1123]: time="2024-02-12T20:19:53.966825933Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 12 20:19:53.967238 env[1123]: time="2024-02-12T20:19:53.967193813Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 12 20:19:53.967280 env[1123]: time="2024-02-12T20:19:53.967255770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967280 env[1123]: time="2024-02-12T20:19:53.967270848Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 12 20:19:53.967361 env[1123]: time="2024-02-12T20:19:53.967339857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967361 env[1123]: time="2024-02-12T20:19:53.967358382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967419 env[1123]: time="2024-02-12T20:19:53.967370535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967419 env[1123]: time="2024-02-12T20:19:53.967381666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967419 env[1123]: time="2024-02-12T20:19:53.967394129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967419 env[1123]: time="2024-02-12T20:19:53.967406502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967419 env[1123]: time="2024-02-12T20:19:53.967417323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967513 env[1123]: time="2024-02-12T20:19:53.967427983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967513 env[1123]: time="2024-02-12T20:19:53.967452939Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 12 20:19:53.967637 env[1123]: time="2024-02-12T20:19:53.967612378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967637 env[1123]: time="2024-02-12T20:19:53.967634971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967697 env[1123]: time="2024-02-12T20:19:53.967646763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967697 env[1123]: time="2024-02-12T20:19:53.967657583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 12 20:19:53.967697 env[1123]: time="2024-02-12T20:19:53.967671049Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 12 20:19:53.967697 env[1123]: time="2024-02-12T20:19:53.967681829Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 12 20:19:53.967771 env[1123]: time="2024-02-12T20:19:53.967698700Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 12 20:19:53.967771 env[1123]: time="2024-02-12T20:19:53.967733776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 12 20:19:53.967959 env[1123]: time="2024-02-12T20:19:53.967905669Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 12 20:19:53.967959 env[1123]: time="2024-02-12T20:19:53.967959459Z" level=info msg="Connect containerd service" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968002841Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968532324Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968642290Z" level=info msg="Start subscribing containerd event" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968680883Z" level=info msg="Start recovering 
state" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968730556Z" level=info msg="Start event monitor" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968740074Z" level=info msg="Start snapshots syncer" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968740735Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968747818Z" level=info msg="Start cni network conf syncer for default" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968770581Z" level=info msg="Start streaming server" Feb 12 20:19:53.968960 env[1123]: time="2024-02-12T20:19:53.968775200Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 12 20:19:53.968900 systemd[1]: Started containerd.service. Feb 12 20:19:53.969931 env[1123]: time="2024-02-12T20:19:53.969023475Z" level=info msg="containerd successfully booted in 0.320098s" Feb 12 20:19:54.009158 tar[1115]: ./dhcp Feb 12 20:19:54.156886 tar[1115]: ./static Feb 12 20:19:54.185435 tar[1115]: ./firewall Feb 12 20:19:54.198139 tar[1118]: linux-amd64/LICENSE Feb 12 20:19:54.198380 tar[1118]: linux-amd64/README.md Feb 12 20:19:54.202772 systemd[1]: Finished prepare-helm.service. Feb 12 20:19:54.211878 locksmithd[1146]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 12 20:19:54.225368 systemd[1]: Finished prepare-critools.service. Feb 12 20:19:54.230909 tar[1115]: ./macvlan Feb 12 20:19:54.261701 tar[1115]: ./dummy Feb 12 20:19:54.294823 tar[1115]: ./bridge Feb 12 20:19:54.332407 tar[1115]: ./ipvlan Feb 12 20:19:54.371608 tar[1115]: ./portmap Feb 12 20:19:54.401196 tar[1115]: ./host-local Feb 12 20:19:54.439339 systemd[1]: Finished prepare-cni-plugins.service. Feb 12 20:19:55.534963 sshd_keygen[1111]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 12 20:19:55.552510 systemd[1]: Finished sshd-keygen.service. Feb 12 20:19:55.554428 systemd[1]: Starting issuegen.service... Feb 12 20:19:55.555634 systemd[1]: Started sshd@0-10.0.0.33:22-10.0.0.1:35204.service. Feb 12 20:19:55.560255 systemd[1]: issuegen.service: Deactivated successfully. Feb 12 20:19:55.560452 systemd[1]: Finished issuegen.service. Feb 12 20:19:55.562771 systemd[1]: Starting systemd-user-sessions.service... Feb 12 20:19:55.569841 systemd[1]: Finished systemd-user-sessions.service. Feb 12 20:19:55.572109 systemd[1]: Started getty@tty1.service. Feb 12 20:19:55.573739 systemd[1]: Started serial-getty@ttyS0.service. Feb 12 20:19:55.574641 systemd[1]: Reached target getty.target. Feb 12 20:19:55.575396 systemd[1]: Reached target multi-user.target. Feb 12 20:19:55.577099 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 12 20:19:55.583739 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 12 20:19:55.583856 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 12 20:19:55.584640 systemd[1]: Startup finished in 541ms (kernel) + 5.363s (initrd) + 6.545s (userspace) = 12.450s. Feb 12 20:19:55.593481 sshd[1171]: Accepted publickey for core from 10.0.0.1 port 35204 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:19:55.594753 sshd[1171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:55.601280 systemd[1]: Created slice user-500.slice. Feb 12 20:19:55.602223 systemd[1]: Starting user-runtime-dir@500.service... Feb 12 20:19:55.603493 systemd-logind[1104]: New session 1 of user core. 
Feb 12 20:19:55.609191 systemd[1]: Finished user-runtime-dir@500.service. Feb 12 20:19:55.610268 systemd[1]: Starting user@500.service... Feb 12 20:19:55.612338 (systemd)[1180]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:55.675111 systemd[1180]: Queued start job for default target default.target. Feb 12 20:19:55.675622 systemd[1180]: Reached target paths.target. Feb 12 20:19:55.675653 systemd[1180]: Reached target sockets.target. Feb 12 20:19:55.675675 systemd[1180]: Reached target timers.target. Feb 12 20:19:55.675689 systemd[1180]: Reached target basic.target. Feb 12 20:19:55.675737 systemd[1180]: Reached target default.target. Feb 12 20:19:55.675767 systemd[1180]: Startup finished in 58ms. Feb 12 20:19:55.675843 systemd[1]: Started user@500.service. Feb 12 20:19:55.676889 systemd[1]: Started session-1.scope. Feb 12 20:19:55.730056 systemd[1]: Started sshd@1-10.0.0.33:22-10.0.0.1:49240.service. Feb 12 20:19:55.762166 sshd[1189]: Accepted publickey for core from 10.0.0.1 port 49240 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:19:55.763320 sshd[1189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:55.767195 systemd-logind[1104]: New session 2 of user core. Feb 12 20:19:55.768186 systemd[1]: Started session-2.scope. Feb 12 20:19:55.825195 sshd[1189]: pam_unix(sshd:session): session closed for user core Feb 12 20:19:55.828202 systemd[1]: sshd@1-10.0.0.33:22-10.0.0.1:49240.service: Deactivated successfully. Feb 12 20:19:55.828863 systemd[1]: session-2.scope: Deactivated successfully. Feb 12 20:19:55.829430 systemd-logind[1104]: Session 2 logged out. Waiting for processes to exit. Feb 12 20:19:55.830881 systemd[1]: Started sshd@2-10.0.0.33:22-10.0.0.1:49246.service. Feb 12 20:19:55.831633 systemd-logind[1104]: Removed session 2. Feb 12 20:19:55.861266 sshd[1195]: Accepted publickey for core from 10.0.0.1 port 49246 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:19:55.862247 sshd[1195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:55.865205 systemd-logind[1104]: New session 3 of user core. Feb 12 20:19:55.865931 systemd[1]: Started session-3.scope. Feb 12 20:19:55.918382 sshd[1195]: pam_unix(sshd:session): session closed for user core Feb 12 20:19:55.921804 systemd[1]: sshd@2-10.0.0.33:22-10.0.0.1:49246.service: Deactivated successfully. Feb 12 20:19:55.922434 systemd[1]: session-3.scope: Deactivated successfully. Feb 12 20:19:55.922965 systemd-logind[1104]: Session 3 logged out. Waiting for processes to exit. Feb 12 20:19:55.924204 systemd[1]: Started sshd@3-10.0.0.33:22-10.0.0.1:49248.service. Feb 12 20:19:55.924950 systemd-logind[1104]: Removed session 3. Feb 12 20:19:55.957292 sshd[1201]: Accepted publickey for core from 10.0.0.1 port 49248 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:19:55.958499 sshd[1201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:55.962104 systemd-logind[1104]: New session 4 of user core. Feb 12 20:19:55.962998 systemd[1]: Started session-4.scope. Feb 12 20:19:56.017957 sshd[1201]: pam_unix(sshd:session): session closed for user core Feb 12 20:19:56.020479 systemd[1]: sshd@3-10.0.0.33:22-10.0.0.1:49248.service: Deactivated successfully. Feb 12 20:19:56.020982 systemd[1]: session-4.scope: Deactivated successfully. Feb 12 20:19:56.021476 systemd-logind[1104]: Session 4 logged out. Waiting for processes to exit. 
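[editor's note] Sessions 2 through 4 above open and close within a second of each other, each a fresh publickey login from 10.0.0.1; that burst pattern is typical of a provisioner running one command per connection. A small sketch, again against a hypothetical journal.txt dump, that tallies accepted logins per user and source address:

```python
import re
from collections import Counter

ACCEPT_RE = re.compile(
    r"sshd\[\d+\]: Accepted publickey for (?P<user>\S+) "
    r"from (?P<addr>[\d.]+) port (?P<port>\d+)"
)

def login_summary(journal_path):
    """Count accepted SSH publickey logins per (user, source address)."""
    counts = Counter()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = ACCEPT_RE.search(line)
            if m:
                counts[(m["user"], m["addr"])] += 1
    return counts

if __name__ == "__main__":
    for (user, addr), n in login_summary("journal.txt").items():
        print(f"{user}@{addr}: {n} login(s)")
```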
Feb 12 20:19:56.022691 systemd[1]: Started sshd@4-10.0.0.33:22-10.0.0.1:49256.service. Feb 12 20:19:56.023652 systemd-logind[1104]: Removed session 4. Feb 12 20:19:56.055487 sshd[1207]: Accepted publickey for core from 10.0.0.1 port 49256 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:19:56.056910 sshd[1207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:19:56.061045 systemd-logind[1104]: New session 5 of user core. Feb 12 20:19:56.062126 systemd[1]: Started session-5.scope. Feb 12 20:19:56.118843 sudo[1210]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 12 20:19:56.119005 sudo[1210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 12 20:19:56.652661 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 12 20:19:57.353690 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:19:57.354059 systemd[1]: Reached target network-online.target. Feb 12 20:19:57.355341 systemd[1]: Starting docker.service... Feb 12 20:19:57.385770 env[1228]: time="2024-02-12T20:19:57.385719609Z" level=info msg="Starting up" Feb 12 20:19:57.386923 env[1228]: time="2024-02-12T20:19:57.386895238Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:19:57.386923 env[1228]: time="2024-02-12T20:19:57.386918958Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:19:57.387011 env[1228]: time="2024-02-12T20:19:57.386942534Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:19:57.387011 env[1228]: time="2024-02-12T20:19:57.386955516Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:19:57.388314 env[1228]: time="2024-02-12T20:19:57.388294375Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 12 20:19:57.388400 env[1228]: time="2024-02-12T20:19:57.388379546Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 12 20:19:57.388493 env[1228]: time="2024-02-12T20:19:57.388472871Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 12 20:19:57.388573 env[1228]: time="2024-02-12T20:19:57.388553399Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 12 20:19:58.057047 env[1228]: time="2024-02-12T20:19:58.056983402Z" level=info msg="Loading containers: start." Feb 12 20:19:58.136035 kernel: Initializing XFRM netlink socket Feb 12 20:19:58.163650 env[1228]: time="2024-02-12T20:19:58.163605909Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 12 20:19:58.206809 systemd-networkd[1017]: docker0: Link UP Feb 12 20:19:58.215265 env[1228]: time="2024-02-12T20:19:58.215219698Z" level=info msg="Loading containers: done." 
Feb 12 20:19:58.239666 env[1228]: time="2024-02-12T20:19:58.239616360Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:19:58.239840 env[1228]: time="2024-02-12T20:19:58.239799686Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:19:58.239925 env[1228]: time="2024-02-12T20:19:58.239902384Z" level=info msg="Daemon has completed initialization" Feb 12 20:19:58.254515 systemd[1]: Started docker.service. Feb 12 20:19:58.260981 env[1228]: time="2024-02-12T20:19:58.260931762Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:19:58.275489 systemd[1]: Reloading. Feb 12 20:19:58.343699 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-02-12T20:19:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:19:58.343737 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2024-02-12T20:19:58Z" level=info msg="torcx already run" Feb 12 20:19:58.403396 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:19:58.403411 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:19:58.423074 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:19:58.490145 systemd[1]: Started kubelet.service. Feb 12 20:19:58.572630 kubelet[1412]: E0212 20:19:58.572557 1412 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 20:19:58.575108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:19:58.575222 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:19:58.866028 env[1123]: time="2024-02-12T20:19:58.865945301Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 12 20:19:59.561148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848769386.mount: Deactivated successfully. 
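[editor's note] Once dockerd logs "API listen on /run/docker.sock", the daemon can be health-checked over that unix socket; /_ping is a documented Docker Engine API endpoint that returns "OK". A self-contained probe sketch using only the standard library:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix domain socket instead of TCP."""
    def __init__(self, path, timeout=5):
        super().__init__("localhost", timeout=timeout)
        self._path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self._path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
resp = conn.getresponse()
print(resp.status, resp.read().decode())  # expect: 200 OK
```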
Feb 12 20:20:01.448386 env[1123]: time="2024-02-12T20:20:01.448311521Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:01.450179 env[1123]: time="2024-02-12T20:20:01.450139071Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:01.451802 env[1123]: time="2024-02-12T20:20:01.451783203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:01.453621 env[1123]: time="2024-02-12T20:20:01.453590707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:01.454508 env[1123]: time="2024-02-12T20:20:01.454463270Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 12 20:20:01.462486 env[1123]: time="2024-02-12T20:20:01.462451291Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 12 20:20:03.697827 env[1123]: time="2024-02-12T20:20:03.697754498Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:03.699514 env[1123]: time="2024-02-12T20:20:03.699463540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:03.701066 env[1123]: time="2024-02-12T20:20:03.701042054Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:03.702577 env[1123]: time="2024-02-12T20:20:03.702546668Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:03.704419 env[1123]: time="2024-02-12T20:20:03.704388550Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 12 20:20:03.721312 env[1123]: time="2024-02-12T20:20:03.721275221Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 12 20:20:05.242145 env[1123]: time="2024-02-12T20:20:05.242091716Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:05.244070 env[1123]: time="2024-02-12T20:20:05.244041078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:05.246074 env[1123]: 
time="2024-02-12T20:20:05.246040210Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:05.247769 env[1123]: time="2024-02-12T20:20:05.247737012Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:05.248611 env[1123]: time="2024-02-12T20:20:05.248583744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 12 20:20:05.261473 env[1123]: time="2024-02-12T20:20:05.261430568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 12 20:20:06.374088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2592056752.mount: Deactivated successfully. Feb 12 20:20:07.404107 env[1123]: time="2024-02-12T20:20:07.404031643Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.406067 env[1123]: time="2024-02-12T20:20:07.405983488Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.407327 env[1123]: time="2024-02-12T20:20:07.407298046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.409131 env[1123]: time="2024-02-12T20:20:07.409086055Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.409686 env[1123]: time="2024-02-12T20:20:07.409652449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 12 20:20:07.423528 env[1123]: time="2024-02-12T20:20:07.423467058Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:20:07.908377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2781220031.mount: Deactivated successfully. 
Feb 12 20:20:07.913581 env[1123]: time="2024-02-12T20:20:07.913545600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.915268 env[1123]: time="2024-02-12T20:20:07.915220666Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.916760 env[1123]: time="2024-02-12T20:20:07.916724489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.918047 env[1123]: time="2024-02-12T20:20:07.918020518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:07.918376 env[1123]: time="2024-02-12T20:20:07.918350933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 12 20:20:07.927960 env[1123]: time="2024-02-12T20:20:07.927929468Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 12 20:20:08.826028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:20:08.826209 systemd[1]: Stopped kubelet.service. Feb 12 20:20:08.827541 systemd[1]: Started kubelet.service. Feb 12 20:20:08.885407 kubelet[1468]: E0212 20:20:08.885341 1468 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 12 20:20:08.889507 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:20:08.889657 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:20:09.153779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1822574318.mount: Deactivated successfully. 
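[editor's note] The kubelet crash-loop above is the expected chicken-and-egg on first boot: /var/lib/kubelet/config.yaml does not exist until bootstrap (kubeadm writes it during init/join) runs. A hypothetical helper sketching the bare minimum of that file; apiVersion/kind and the two fields shown are real KubeletConfiguration names, and cgroupDriver=systemd plus staticPodPath=/etc/kubernetes/manifests match what this kubelet later logs, but the snippet is an illustration, not a substitute for kubeadm:

```python
import pathlib

# Bare-bones KubeletConfiguration; kubeadm normally generates this file
# with many more fields. Writing it requires root.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(MINIMAL_KUBELET_CONFIG)
print(f"wrote {path}")
```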
Feb 12 20:20:13.853736 env[1123]: time="2024-02-12T20:20:13.853670985Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:13.864784 env[1123]: time="2024-02-12T20:20:13.864725944Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:13.866777 env[1123]: time="2024-02-12T20:20:13.866725554Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:13.868337 env[1123]: time="2024-02-12T20:20:13.868311234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:13.868826 env[1123]: time="2024-02-12T20:20:13.868798618Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 12 20:20:13.878186 env[1123]: time="2024-02-12T20:20:13.878146144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 12 20:20:14.406005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1693831822.mount: Deactivated successfully. Feb 12 20:20:15.101074 env[1123]: time="2024-02-12T20:20:15.100999737Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:15.102918 env[1123]: time="2024-02-12T20:20:15.102867297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:15.104319 env[1123]: time="2024-02-12T20:20:15.104290840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:15.105506 env[1123]: time="2024-02-12T20:20:15.105458529Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:15.105907 env[1123]: time="2024-02-12T20:20:15.105878650Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 12 20:20:17.120144 systemd[1]: Stopped kubelet.service. Feb 12 20:20:17.133618 systemd[1]: Reloading. 
Feb 12 20:20:17.200826 /usr/lib/systemd/system-generators/torcx-generator[1584]: time="2024-02-12T20:20:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:20:17.200862 /usr/lib/systemd/system-generators/torcx-generator[1584]: time="2024-02-12T20:20:17Z" level=info msg="torcx already run" Feb 12 20:20:17.259273 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:20:17.259295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:20:17.277685 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:20:17.348410 systemd[1]: Started kubelet.service. Feb 12 20:20:17.391795 kubelet[1625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:20:17.391795 kubelet[1625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 20:20:17.391795 kubelet[1625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:20:17.391795 kubelet[1625]: I0212 20:20:17.391755 1625 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:20:17.592894 kubelet[1625]: I0212 20:20:17.592846 1625 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 20:20:17.592894 kubelet[1625]: I0212 20:20:17.592880 1625 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:20:17.593228 kubelet[1625]: I0212 20:20:17.593203 1625 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 20:20:17.598888 kubelet[1625]: E0212 20:20:17.598860 1625 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.599260 kubelet[1625]: I0212 20:20:17.599197 1625 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:20:17.602896 kubelet[1625]: I0212 20:20:17.602874 1625 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 12 20:20:17.603110 kubelet[1625]: I0212 20:20:17.603090 1625 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:20:17.603179 kubelet[1625]: I0212 20:20:17.603161 1625 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:20:17.603266 kubelet[1625]: I0212 20:20:17.603182 1625 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:20:17.603266 kubelet[1625]: I0212 20:20:17.603193 1625 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 20:20:17.603318 kubelet[1625]: I0212 20:20:17.603271 1625 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:20:17.605458 kubelet[1625]: I0212 20:20:17.605435 1625 kubelet.go:405] "Attempting to sync node with API server" Feb 12 20:20:17.605458 kubelet[1625]: I0212 20:20:17.605457 1625 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:20:17.605534 kubelet[1625]: I0212 20:20:17.605474 1625 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:20:17.605534 kubelet[1625]: I0212 20:20:17.605494 1625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:20:17.606110 kubelet[1625]: I0212 20:20:17.606085 1625 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:20:17.606110 kubelet[1625]: W0212 20:20:17.606088 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.606183 kubelet[1625]: E0212 20:20:17.606129 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.606300 kubelet[1625]: W0212 20:20:17.606252 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.606363 kubelet[1625]: E0212 20:20:17.606307 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.606389 kubelet[1625]: W0212 20:20:17.606368 1625 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 12 20:20:17.606723 kubelet[1625]: I0212 20:20:17.606703 1625 server.go:1168] "Started kubelet" Feb 12 20:20:17.607097 kubelet[1625]: E0212 20:20:17.606977 1625 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b3370e25cb0de4", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 20, 17, 606684132, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 20, 17, 606684132, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.33:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.33:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:20:17.607097 kubelet[1625]: I0212 20:20:17.607067 1625 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:20:17.607289 kubelet[1625]: I0212 20:20:17.607274 1625 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:20:17.607694 kubelet[1625]: E0212 20:20:17.607671 1625 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:20:17.607743 kubelet[1625]: E0212 20:20:17.607699 1625 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:20:17.608161 kubelet[1625]: I0212 20:20:17.608145 1625 server.go:461] "Adding debug handlers to kubelet server" Feb 12 20:20:17.609531 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 12 20:20:17.609655 kubelet[1625]: I0212 20:20:17.609598 1625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:20:17.610014 kubelet[1625]: I0212 20:20:17.609739 1625 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 20:20:17.610014 kubelet[1625]: E0212 20:20:17.609754 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:17.610014 kubelet[1625]: I0212 20:20:17.609822 1625 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 20:20:17.610014 kubelet[1625]: E0212 20:20:17.610004 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="200ms" Feb 12 20:20:17.610130 kubelet[1625]: W0212 20:20:17.610054 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.610130 kubelet[1625]: E0212 20:20:17.610087 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.620876 kubelet[1625]: I0212 20:20:17.620836 1625 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:20:17.621667 kubelet[1625]: I0212 20:20:17.621638 1625 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:20:17.621667 kubelet[1625]: I0212 20:20:17.621663 1625 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 20:20:17.621667 kubelet[1625]: I0212 20:20:17.621681 1625 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 20:20:17.621828 kubelet[1625]: E0212 20:20:17.621724 1625 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:20:17.622253 kubelet[1625]: W0212 20:20:17.622227 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.622371 kubelet[1625]: E0212 20:20:17.622354 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:17.630609 kubelet[1625]: I0212 20:20:17.630576 1625 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:20:17.630609 kubelet[1625]: I0212 20:20:17.630594 1625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:20:17.630609 kubelet[1625]: I0212 20:20:17.630606 1625 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:20:17.633383 kubelet[1625]: I0212 20:20:17.633363 1625 policy_none.go:49] "None policy: Start" Feb 12 20:20:17.633848 kubelet[1625]: I0212 20:20:17.633835 1625 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:20:17.633848 kubelet[1625]: I0212 20:20:17.633849 1625 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:20:17.638900 systemd[1]: Created slice kubepods.slice. Feb 12 20:20:17.641952 systemd[1]: Created slice kubepods-burstable.slice. Feb 12 20:20:17.644472 systemd[1]: Created slice kubepods-besteffort.slice. Feb 12 20:20:17.650517 kubelet[1625]: I0212 20:20:17.650492 1625 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:20:17.651101 kubelet[1625]: I0212 20:20:17.650679 1625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:20:17.651101 kubelet[1625]: E0212 20:20:17.651062 1625 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 12 20:20:17.711553 kubelet[1625]: I0212 20:20:17.711534 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:20:17.711879 kubelet[1625]: E0212 20:20:17.711856 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Feb 12 20:20:17.721938 kubelet[1625]: I0212 20:20:17.721924 1625 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:17.722656 kubelet[1625]: I0212 20:20:17.722643 1625 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:17.723193 kubelet[1625]: I0212 20:20:17.723152 1625 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:17.727153 systemd[1]: Created slice kubepods-burstable-pod2b0e94b38682f4e439413801d3cc54db.slice. 
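[editor's note] Each "Topology Admit Handler" above corresponds to a static pod, and with the systemd cgroup driver each one gets its own slice under kubepods.slice named after the pod UID (the kubepods-burstable-pod...slice units being created here). A sketch listing those slices, assuming cgroup v2 mounted at /sys/fs/cgroup; reading cgroup.procs generally requires root:

```python
import pathlib

# Kubelet's systemd driver nests pods as
#   kubepods.slice/kubepods-<qos>.slice/kubepods-<qos>-pod<uid>.slice
ROOT = pathlib.Path("/sys/fs/cgroup/kubepods.slice")

for slice_dir in sorted(ROOT.rglob("*-pod*.slice")):
    procs = (slice_dir / "cgroup.procs").read_text().split()
    print(slice_dir.name, f"{len(procs)} proc(s)")
```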
Feb 12 20:20:17.741061 systemd[1]: Created slice kubepods-burstable-pode954b954a4bf574640f39374fb7f6e3d.slice. Feb 12 20:20:17.744593 systemd[1]: Created slice kubepods-burstable-pod7709ea05d7cdf82b0d7e594b61a10331.slice. Feb 12 20:20:17.810668 kubelet[1625]: E0212 20:20:17.810628 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="400ms" Feb 12 20:20:17.811629 kubelet[1625]: I0212 20:20:17.811607 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:17.811682 kubelet[1625]: I0212 20:20:17.811663 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:17.811729 kubelet[1625]: I0212 20:20:17.811716 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:17.811768 kubelet[1625]: I0212 20:20:17.811752 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:17.811842 kubelet[1625]: I0212 20:20:17.811819 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:17.811868 kubelet[1625]: I0212 20:20:17.811861 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:20:17.811892 kubelet[1625]: I0212 20:20:17.811880 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:17.811935 kubelet[1625]: I0212 20:20:17.811919 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:17.811982 kubelet[1625]: I0212 20:20:17.811969 1625 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:17.912908 kubelet[1625]: I0212 20:20:17.912827 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:20:17.913109 kubelet[1625]: E0212 20:20:17.913097 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Feb 12 20:20:18.039915 kubelet[1625]: E0212 20:20:18.039886 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.040604 env[1123]: time="2024-02-12T20:20:18.040562035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:18.043699 kubelet[1625]: E0212 20:20:18.043680 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.044168 env[1123]: time="2024-02-12T20:20:18.044135906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e954b954a4bf574640f39374fb7f6e3d,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:18.046326 kubelet[1625]: E0212 20:20:18.046301 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.046608 env[1123]: time="2024-02-12T20:20:18.046576657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:18.212045 kubelet[1625]: E0212 20:20:18.211937 1625 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.33:6443: connect: connection refused" interval="800ms" Feb 12 20:20:18.314406 kubelet[1625]: I0212 20:20:18.314371 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:20:18.314689 kubelet[1625]: E0212 20:20:18.314669 1625 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.33:6443/api/v1/nodes\": dial tcp 10.0.0.33:6443: connect: connection refused" node="localhost" Feb 12 20:20:18.475726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2292716411.mount: Deactivated successfully. 
Feb 12 20:20:18.480885 env[1123]: time="2024-02-12T20:20:18.480840655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.481709 env[1123]: time="2024-02-12T20:20:18.481671155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.482558 env[1123]: time="2024-02-12T20:20:18.482517029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.486730 env[1123]: time="2024-02-12T20:20:18.486696855Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.488580 env[1123]: time="2024-02-12T20:20:18.488552404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.489639 env[1123]: time="2024-02-12T20:20:18.489608724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.491401 env[1123]: time="2024-02-12T20:20:18.491377705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.492294 env[1123]: time="2024-02-12T20:20:18.492275333Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.493542 env[1123]: time="2024-02-12T20:20:18.493523863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.495913 env[1123]: time="2024-02-12T20:20:18.495890471Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.496601 env[1123]: time="2024-02-12T20:20:18.496580795Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.497936 env[1123]: time="2024-02-12T20:20:18.497895117Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:18.521044 env[1123]: time="2024-02-12T20:20:18.519888794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:18.521044 env[1123]: time="2024-02-12T20:20:18.519926397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:18.521044 env[1123]: time="2024-02-12T20:20:18.519936443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:18.521044 env[1123]: time="2024-02-12T20:20:18.520099269Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a3940f54448f98e99ac3b497491264a82b7f9cf5994bca49a9519ed4448b19e pid=1674 runtime=io.containerd.runc.v2 Feb 12 20:20:18.521412 env[1123]: time="2024-02-12T20:20:18.521348080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:18.521459 env[1123]: time="2024-02-12T20:20:18.521424430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:18.521484 env[1123]: time="2024-02-12T20:20:18.521453634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:18.521668 env[1123]: time="2024-02-12T20:20:18.521626616Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d1ee3979699a55f22568cc53895c93658c86f7ab8a1c6c644eb09317faab5c3 pid=1671 runtime=io.containerd.runc.v2 Feb 12 20:20:18.526585 env[1123]: time="2024-02-12T20:20:18.526431616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:18.526585 env[1123]: time="2024-02-12T20:20:18.526461371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:18.526585 env[1123]: time="2024-02-12T20:20:18.526471548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:18.526865 env[1123]: time="2024-02-12T20:20:18.526790938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4786b76575cc6aa0758cfb7f5341d7792d9f58a293560b0c8e76a887a6aaf45 pid=1701 runtime=io.containerd.runc.v2 Feb 12 20:20:18.532209 systemd[1]: Started cri-containerd-5d1ee3979699a55f22568cc53895c93658c86f7ab8a1c6c644eb09317faab5c3.scope. Feb 12 20:20:18.542700 systemd[1]: Started cri-containerd-c4786b76575cc6aa0758cfb7f5341d7792d9f58a293560b0c8e76a887a6aaf45.scope. Feb 12 20:20:18.545211 systemd[1]: Started cri-containerd-8a3940f54448f98e99ac3b497491264a82b7f9cf5994bca49a9519ed4448b19e.scope. 
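[editor's note] Each sandbox start above spawns a runc.v2 shim, and its "starting signal loop" entry records the bundle path (ending in the 64-hex sandbox id) and the shim pid. A sketch correlating the two from a hypothetical journal.txt dump of this log:

```python
import pathlib
import re

SHIM_RE = re.compile(
    r'msg="starting signal loop" namespace=(?P<ns>\S+) '
    r'path=\S*/(?P<sandbox>[0-9a-f]{64}) pid=(?P<pid>\d+)'
)

def shim_pids(journal_path):
    """Map sandbox id -> shim pid from containerd's runc.v2 entries."""
    text = pathlib.Path(journal_path).read_text(encoding="utf-8", errors="replace")
    return {m["sandbox"]: int(m["pid"]) for m in SHIM_RE.finditer(text)}

for sandbox, pid in shim_pids("journal.txt").items():
    print(f"{sandbox[:12]} shim pid {pid}")
```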
Feb 12 20:20:18.574013 env[1123]: time="2024-02-12T20:20:18.573943116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7709ea05d7cdf82b0d7e594b61a10331,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d1ee3979699a55f22568cc53895c93658c86f7ab8a1c6c644eb09317faab5c3\"" Feb 12 20:20:18.575620 kubelet[1625]: E0212 20:20:18.574650 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.579208 env[1123]: time="2024-02-12T20:20:18.579170239Z" level=info msg="CreateContainer within sandbox \"5d1ee3979699a55f22568cc53895c93658c86f7ab8a1c6c644eb09317faab5c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:20:18.585732 env[1123]: time="2024-02-12T20:20:18.585691224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e954b954a4bf574640f39374fb7f6e3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4786b76575cc6aa0758cfb7f5341d7792d9f58a293560b0c8e76a887a6aaf45\"" Feb 12 20:20:18.586289 kubelet[1625]: E0212 20:20:18.586269 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.588710 env[1123]: time="2024-02-12T20:20:18.588680887Z" level=info msg="CreateContainer within sandbox \"c4786b76575cc6aa0758cfb7f5341d7792d9f58a293560b0c8e76a887a6aaf45\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:20:18.595107 env[1123]: time="2024-02-12T20:20:18.595073557Z" level=info msg="CreateContainer within sandbox \"5d1ee3979699a55f22568cc53895c93658c86f7ab8a1c6c644eb09317faab5c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15ac910630f72a646a489b2c835e9e6849ddc6467ad8a6d1249e4e8240b90369\"" Feb 12 20:20:18.595735 env[1123]: time="2024-02-12T20:20:18.595705444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:2b0e94b38682f4e439413801d3cc54db,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a3940f54448f98e99ac3b497491264a82b7f9cf5994bca49a9519ed4448b19e\"" Feb 12 20:20:18.596134 env[1123]: time="2024-02-12T20:20:18.596102319Z" level=info msg="StartContainer for \"15ac910630f72a646a489b2c835e9e6849ddc6467ad8a6d1249e4e8240b90369\"" Feb 12 20:20:18.596301 kubelet[1625]: E0212 20:20:18.596282 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:18.598138 env[1123]: time="2024-02-12T20:20:18.598097101Z" level=info msg="CreateContainer within sandbox \"8a3940f54448f98e99ac3b497491264a82b7f9cf5994bca49a9519ed4448b19e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:20:18.606590 env[1123]: time="2024-02-12T20:20:18.606542898Z" level=info msg="CreateContainer within sandbox \"c4786b76575cc6aa0758cfb7f5341d7792d9f58a293560b0c8e76a887a6aaf45\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be51b24df7f1391c401caccbab54cdac624105bdd3a2811a08caca63007a9126\"" Feb 12 20:20:18.606932 env[1123]: time="2024-02-12T20:20:18.606908071Z" level=info msg="StartContainer for \"be51b24df7f1391c401caccbab54cdac624105bdd3a2811a08caca63007a9126\"" Feb 12 20:20:18.612539 systemd[1]: Started 
cri-containerd-15ac910630f72a646a489b2c835e9e6849ddc6467ad8a6d1249e4e8240b90369.scope. Feb 12 20:20:18.617634 env[1123]: time="2024-02-12T20:20:18.617575815Z" level=info msg="CreateContainer within sandbox \"8a3940f54448f98e99ac3b497491264a82b7f9cf5994bca49a9519ed4448b19e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c75e21de5228a30fdcf049550e5d71d5d2c8742a147efb60993413270d772ee\"" Feb 12 20:20:18.617914 env[1123]: time="2024-02-12T20:20:18.617886605Z" level=info msg="StartContainer for \"2c75e21de5228a30fdcf049550e5d71d5d2c8742a147efb60993413270d772ee\"" Feb 12 20:20:18.624569 systemd[1]: Started cri-containerd-be51b24df7f1391c401caccbab54cdac624105bdd3a2811a08caca63007a9126.scope. Feb 12 20:20:18.637466 systemd[1]: Started cri-containerd-2c75e21de5228a30fdcf049550e5d71d5d2c8742a147efb60993413270d772ee.scope. Feb 12 20:20:18.661135 env[1123]: time="2024-02-12T20:20:18.661084806Z" level=info msg="StartContainer for \"15ac910630f72a646a489b2c835e9e6849ddc6467ad8a6d1249e4e8240b90369\" returns successfully" Feb 12 20:20:18.671753 env[1123]: time="2024-02-12T20:20:18.671705433Z" level=info msg="StartContainer for \"be51b24df7f1391c401caccbab54cdac624105bdd3a2811a08caca63007a9126\" returns successfully" Feb 12 20:20:18.687610 env[1123]: time="2024-02-12T20:20:18.687565788Z" level=info msg="StartContainer for \"2c75e21de5228a30fdcf049550e5d71d5d2c8742a147efb60993413270d772ee\" returns successfully" Feb 12 20:20:18.728087 kubelet[1625]: W0212 20:20:18.727920 1625 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:18.728087 kubelet[1625]: E0212 20:20:18.728019 1625 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.33:6443: connect: connection refused Feb 12 20:20:19.116335 kubelet[1625]: I0212 20:20:19.115861 1625 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:20:19.632896 kubelet[1625]: E0212 20:20:19.632865 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:19.635204 kubelet[1625]: E0212 20:20:19.635188 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:19.637207 kubelet[1625]: E0212 20:20:19.637192 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:19.995476 kubelet[1625]: E0212 20:20:19.995382 1625 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 12 20:20:20.089847 kubelet[1625]: I0212 20:20:20.089793 1625 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:20:20.096441 kubelet[1625]: E0212 20:20:20.096415 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:20.197491 kubelet[1625]: E0212 20:20:20.197439 1625 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:20.298112 kubelet[1625]: E0212 20:20:20.297996 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:20.398552 kubelet[1625]: E0212 20:20:20.398496 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:20.499114 kubelet[1625]: E0212 20:20:20.499048 1625 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 12 20:20:20.607452 kubelet[1625]: I0212 20:20:20.607342 1625 apiserver.go:52] "Watching apiserver" Feb 12 20:20:20.609914 kubelet[1625]: I0212 20:20:20.609887 1625 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 20:20:20.626366 kubelet[1625]: I0212 20:20:20.626296 1625 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:20:20.641936 kubelet[1625]: E0212 20:20:20.641910 1625 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Feb 12 20:20:20.641936 kubelet[1625]: E0212 20:20:20.641931 1625 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:20.642334 kubelet[1625]: E0212 20:20:20.642033 1625 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:20.642334 kubelet[1625]: E0212 20:20:20.642149 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:20.642438 kubelet[1625]: E0212 20:20:20.642409 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:20.642438 kubelet[1625]: E0212 20:20:20.642429 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:21.643207 kubelet[1625]: E0212 20:20:21.643157 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:22.229651 kubelet[1625]: E0212 20:20:22.229609 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:22.479493 systemd[1]: Reloading. 
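[editor's note] The `dns.go:158` error that repeats throughout this log means the host resolv.conf lists more nameservers than the kubelet will pass through: it keeps the first three, matching the classic glibc resolver limit, and reports the rest as omitted, which is why the applied line shows exactly `1.1.1.1 1.0.0.1 8.8.8.8`. A minimal sketch of that truncation, assuming the three-server cap; the fourth server below is hypothetical, since the omitted entries never appear in the log.

```python
MAX_NAMESERVERS = 3  # classic resolv.conf limit the kubelet warns about

def apply_nameserver_limit(nameservers):
    """Split a configured nameserver list into (applied, omitted) halves."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

if __name__ == "__main__":
    configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]  # hypothetical host list
    applied, omitted = apply_nameserver_limit(configured)
    if omitted:
        print("Nameserver limits exceeded, applied line:", " ".join(applied))
```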
Feb 12 20:20:22.552352 /usr/lib/systemd/system-generators/torcx-generator[1923]: time="2024-02-12T20:20:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:20:22.552387 /usr/lib/systemd/system-generators/torcx-generator[1923]: time="2024-02-12T20:20:22Z" level=info msg="torcx already run" Feb 12 20:20:22.601292 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:20:22.601306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:20:22.619966 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:20:22.640845 kubelet[1625]: E0212 20:20:22.640812 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:22.641257 kubelet[1625]: E0212 20:20:22.641240 1625 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:22.708289 systemd[1]: Stopping kubelet.service... Feb 12 20:20:22.724239 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:20:22.724408 systemd[1]: Stopped kubelet.service. Feb 12 20:20:22.725826 systemd[1]: Started kubelet.service. Feb 12 20:20:22.772765 kubelet[1964]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:20:22.772765 kubelet[1964]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 12 20:20:22.772765 kubelet[1964]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:20:22.773143 kubelet[1964]: I0212 20:20:22.772785 1964 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:20:22.776204 kubelet[1964]: I0212 20:20:22.776181 1964 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 12 20:20:22.776204 kubelet[1964]: I0212 20:20:22.776199 1964 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:20:22.776414 kubelet[1964]: I0212 20:20:22.776394 1964 server.go:837] "Client rotation is on, will bootstrap in background" Feb 12 20:20:22.779228 kubelet[1964]: I0212 20:20:22.779208 1964 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
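[editor's note] The reload warnings above name their own fixes: `CPUShares=` should become `CPUWeight=`, `MemoryLimit=` should become `MemoryMax=`, and the `/var/run/` socket path should move to `/run/`. A small, hypothetical Python checker that flags the same legacy directives in unit files; the replacement mapping is taken verbatim from the systemd messages above.

```python
from pathlib import Path

# Replacements quoted from the systemd warnings in this log.
LEGACY_DIRECTIVES = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
    "/var/run/": "/run/",
}

def scan_unit(path: Path) -> list:
    """Report lines in a unit file that use directives systemd warns about."""
    findings = []
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        for old, new in LEGACY_DIRECTIVES.items():
            if old in line:
                findings.append(f"{path}:{lineno}: uses {old!r}; prefer {new!r}")
    return findings

if __name__ == "__main__":
    for unit in Path("/usr/lib/systemd/system").glob("*.service"):
        for finding in scan_unit(unit):
            print(finding)
```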
Feb 12 20:20:22.780082 kubelet[1964]: I0212 20:20:22.780061 1964 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:20:22.783700 kubelet[1964]: I0212 20:20:22.783685 1964 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:20:22.783894 kubelet[1964]: I0212 20:20:22.783881 1964 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:20:22.783951 kubelet[1964]: I0212 20:20:22.783940 1964 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:20:22.784048 kubelet[1964]: I0212 20:20:22.783956 1964 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:20:22.784048 kubelet[1964]: I0212 20:20:22.783964 1964 container_manager_linux.go:302] "Creating device plugin manager" Feb 12 20:20:22.784048 kubelet[1964]: I0212 20:20:22.784000 1964 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:20:22.789007 kubelet[1964]: I0212 20:20:22.788972 1964 kubelet.go:405] "Attempting to sync node with API server" Feb 12 20:20:22.789087 kubelet[1964]: I0212 20:20:22.789014 1964 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:20:22.789087 kubelet[1964]: I0212 20:20:22.789032 1964 kubelet.go:309] "Adding apiserver pod source" Feb 12 20:20:22.789087 kubelet[1964]: I0212 20:20:22.789048 1964 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:20:22.793596 kubelet[1964]: I0212 20:20:22.793578 1964 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:20:22.794128 kubelet[1964]: I0212 20:20:22.794103 1964 server.go:1168] "Started kubelet" Feb 12 20:20:22.794501 kubelet[1964]: I0212 20:20:22.794482 1964 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:20:22.795371 kubelet[1964]: I0212 20:20:22.795351 1964 server.go:461] "Adding debug handlers to kubelet server" Feb 12 20:20:22.796318 kubelet[1964]: I0212 20:20:22.796304 1964 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:20:22.796889 kubelet[1964]: I0212 20:20:22.796871 1964 ratelimit.go:65] 
"Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 12 20:20:22.798772 kubelet[1964]: I0212 20:20:22.798754 1964 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 12 20:20:22.798936 kubelet[1964]: I0212 20:20:22.798877 1964 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 12 20:20:22.800375 kubelet[1964]: E0212 20:20:22.800349 1964 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:20:22.800375 kubelet[1964]: E0212 20:20:22.800369 1964 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:20:22.812018 kubelet[1964]: I0212 20:20:22.810678 1964 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:20:22.812018 kubelet[1964]: I0212 20:20:22.811481 1964 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:20:22.812018 kubelet[1964]: I0212 20:20:22.811502 1964 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 12 20:20:22.812018 kubelet[1964]: I0212 20:20:22.811524 1964 kubelet.go:2257] "Starting kubelet main sync loop" Feb 12 20:20:22.812018 kubelet[1964]: E0212 20:20:22.811579 1964 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:20:22.822872 sudo[1992]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 12 20:20:22.823046 sudo[1992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 12 20:20:22.852489 kubelet[1964]: I0212 20:20:22.852463 1964 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:20:22.852489 kubelet[1964]: I0212 20:20:22.852488 1964 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:20:22.852677 kubelet[1964]: I0212 20:20:22.852508 1964 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:20:22.852677 kubelet[1964]: I0212 20:20:22.852659 1964 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:20:22.852677 kubelet[1964]: I0212 20:20:22.852673 1964 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 20:20:22.852770 kubelet[1964]: I0212 20:20:22.852680 1964 policy_none.go:49] "None policy: Start" Feb 12 20:20:22.853332 kubelet[1964]: I0212 20:20:22.853320 1964 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:20:22.853421 kubelet[1964]: I0212 20:20:22.853408 1964 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:20:22.853582 kubelet[1964]: I0212 20:20:22.853570 1964 state_mem.go:75] "Updated machine memory state" Feb 12 20:20:22.856970 kubelet[1964]: I0212 20:20:22.856953 1964 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:20:22.857147 kubelet[1964]: I0212 20:20:22.857131 1964 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:20:22.901791 kubelet[1964]: I0212 20:20:22.901766 1964 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 12 20:20:22.906523 kubelet[1964]: I0212 20:20:22.906491 1964 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 12 20:20:22.906602 kubelet[1964]: I0212 20:20:22.906543 1964 
kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 12 20:20:22.912843 kubelet[1964]: I0212 20:20:22.912815 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:22.913200 kubelet[1964]: I0212 20:20:22.913184 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:22.913363 kubelet[1964]: I0212 20:20:22.913347 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:22.921227 kubelet[1964]: E0212 20:20:22.921206 1964 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 12 20:20:22.922271 kubelet[1964]: E0212 20:20:22.922256 1964 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:23.100354 kubelet[1964]: I0212 20:20:23.100243 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:23.100354 kubelet[1964]: I0212 20:20:23.100292 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2b0e94b38682f4e439413801d3cc54db-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"2b0e94b38682f4e439413801d3cc54db\") " pod="kube-system/kube-scheduler-localhost" Feb 12 20:20:23.100354 kubelet[1964]: I0212 20:20:23.100344 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:23.100554 kubelet[1964]: I0212 20:20:23.100392 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:23.100554 kubelet[1964]: I0212 20:20:23.100412 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:23.100554 kubelet[1964]: I0212 20:20:23.100433 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:23.100554 kubelet[1964]: I0212 20:20:23.100451 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:23.100554 kubelet[1964]: I0212 20:20:23.100467 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e954b954a4bf574640f39374fb7f6e3d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e954b954a4bf574640f39374fb7f6e3d\") " pod="kube-system/kube-apiserver-localhost" Feb 12 20:20:23.100707 kubelet[1964]: I0212 20:20:23.100487 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7709ea05d7cdf82b0d7e594b61a10331-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7709ea05d7cdf82b0d7e594b61a10331\") " pod="kube-system/kube-controller-manager-localhost" Feb 12 20:20:23.222726 kubelet[1964]: E0212 20:20:23.222697 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.222932 kubelet[1964]: E0212 20:20:23.222799 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.223063 kubelet[1964]: E0212 20:20:23.222857 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.267373 sudo[1992]: pam_unix(sudo:session): session closed for user root Feb 12 20:20:23.790134 kubelet[1964]: I0212 20:20:23.790083 1964 apiserver.go:52] "Watching apiserver" Feb 12 20:20:23.799422 kubelet[1964]: I0212 20:20:23.799401 1964 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 12 20:20:23.803545 kubelet[1964]: I0212 20:20:23.803525 1964 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:20:23.823938 kubelet[1964]: E0212 20:20:23.823917 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.824016 kubelet[1964]: E0212 20:20:23.823958 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.824184 kubelet[1964]: E0212 20:20:23.824164 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:23.955526 kubelet[1964]: I0212 20:20:23.955486 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.955423077 podCreationTimestamp="2024-02-12 20:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:20:23.955282404 +0000 UTC m=+1.226404134" watchObservedRunningTime="2024-02-12 20:20:23.955423077 +0000 UTC m=+1.226544808" Feb 12 20:20:23.955682 kubelet[1964]: I0212 20:20:23.955588 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.955569216 
podCreationTimestamp="2024-02-12 20:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:20:23.839541207 +0000 UTC m=+1.110662947" watchObservedRunningTime="2024-02-12 20:20:23.955569216 +0000 UTC m=+1.226690946" Feb 12 20:20:24.000977 kubelet[1964]: I0212 20:20:24.000950 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.000919926 podCreationTimestamp="2024-02-12 20:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:20:24.000652596 +0000 UTC m=+1.271774326" watchObservedRunningTime="2024-02-12 20:20:24.000919926 +0000 UTC m=+1.272041666" Feb 12 20:20:24.351190 sudo[1210]: pam_unix(sudo:session): session closed for user root Feb 12 20:20:24.352285 sshd[1207]: pam_unix(sshd:session): session closed for user core Feb 12 20:20:24.354109 systemd[1]: sshd@4-10.0.0.33:22-10.0.0.1:49256.service: Deactivated successfully. Feb 12 20:20:24.354798 systemd[1]: session-5.scope: Deactivated successfully. Feb 12 20:20:24.354933 systemd[1]: session-5.scope: Consumed 3.516s CPU time. Feb 12 20:20:24.355358 systemd-logind[1104]: Session 5 logged out. Waiting for processes to exit. Feb 12 20:20:24.356048 systemd-logind[1104]: Removed session 5. Feb 12 20:20:24.825396 kubelet[1964]: E0212 20:20:24.825289 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:24.825778 kubelet[1964]: E0212 20:20:24.825571 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:24.825907 kubelet[1964]: E0212 20:20:24.825888 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:31.140528 kubelet[1964]: E0212 20:20:31.140488 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:31.834931 kubelet[1964]: E0212 20:20:31.834897 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:32.836266 kubelet[1964]: E0212 20:20:32.836232 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:33.966560 kubelet[1964]: E0212 20:20:33.966501 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:34.060637 kubelet[1964]: E0212 20:20:34.060601 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:37.171538 kubelet[1964]: I0212 20:20:37.171506 1964 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:20:37.172356 env[1123]: 
time="2024-02-12T20:20:37.172311040Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 12 20:20:37.172672 kubelet[1964]: I0212 20:20:37.172645 1964 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:20:38.086095 kubelet[1964]: I0212 20:20:38.086045 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:38.091033 systemd[1]: Created slice kubepods-besteffort-podf5bd346c_d2f2_4074_afce_f20cf27cd029.slice. Feb 12 20:20:38.097613 kubelet[1964]: I0212 20:20:38.097581 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5bd346c-d2f2-4074-afce-f20cf27cd029-xtables-lock\") pod \"kube-proxy-zfjvb\" (UID: \"f5bd346c-d2f2-4074-afce-f20cf27cd029\") " pod="kube-system/kube-proxy-zfjvb" Feb 12 20:20:38.097756 kubelet[1964]: I0212 20:20:38.097621 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5bd346c-d2f2-4074-afce-f20cf27cd029-lib-modules\") pod \"kube-proxy-zfjvb\" (UID: \"f5bd346c-d2f2-4074-afce-f20cf27cd029\") " pod="kube-system/kube-proxy-zfjvb" Feb 12 20:20:38.097756 kubelet[1964]: I0212 20:20:38.097642 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5bd346c-d2f2-4074-afce-f20cf27cd029-kube-proxy\") pod \"kube-proxy-zfjvb\" (UID: \"f5bd346c-d2f2-4074-afce-f20cf27cd029\") " pod="kube-system/kube-proxy-zfjvb" Feb 12 20:20:38.097756 kubelet[1964]: I0212 20:20:38.097659 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9zzd\" (UniqueName: \"kubernetes.io/projected/f5bd346c-d2f2-4074-afce-f20cf27cd029-kube-api-access-s9zzd\") pod \"kube-proxy-zfjvb\" (UID: \"f5bd346c-d2f2-4074-afce-f20cf27cd029\") " pod="kube-system/kube-proxy-zfjvb" Feb 12 20:20:38.100559 kubelet[1964]: I0212 20:20:38.100543 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:38.104512 systemd[1]: Created slice kubepods-burstable-pod846f3880_60fc_42fd_afc6_5f43a83ac338.slice. 
Feb 12 20:20:38.109022 kubelet[1964]: W0212 20:20:38.108975 1964 reflector.go:533] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.109153 kubelet[1964]: E0212 20:20:38.109138 1964 reflector.go:148] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.109388 kubelet[1964]: W0212 20:20:38.109375 1964 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.109466 kubelet[1964]: E0212 20:20:38.109453 1964 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.109680 kubelet[1964]: W0212 20:20:38.109667 1964 reflector.go:533] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.109758 kubelet[1964]: E0212 20:20:38.109744 1964 reflector.go:148] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:20:38.185407 kubelet[1964]: I0212 20:20:38.185361 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:38.190554 systemd[1]: Created slice kubepods-besteffort-podc20c170c_72e6_47ca_b7de_228c52a91eb5.slice. 
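[editor's note] The `forbidden` reflector errors above come from node authorization: `system:node:localhost` may only read a secret or configmap once a pod bound to that node references it, so these first listings fail until the cilium pod's binding propagates (the volume mounts a moment later eventually succeed). A small Python filter for spotting such denials in a dump like this one; the pattern covers only the secret/configmap shape shown above.

```python
import re

# Matches kubelet reflector failures caused by node-authorizer scoping, e.g.:
#   secrets "cilium-clustermesh" is forbidden: User "system:node:localhost" cannot list ...
FORBIDDEN_RE = re.compile(
    r'(?P<kind>secrets|configmaps) "(?P<name>[^"]+)" is forbidden: '
    r'User "(?P<user>system:node:[^"]+)"'
)

def node_authz_denials(log_text: str) -> set:
    """Collect (kind, name) pairs the node was denied access to."""
    return {(m.group("kind"), m.group("name")) for m in FORBIDDEN_RE.finditer(log_text)}

if __name__ == "__main__":
    sample = ('secrets "cilium-clustermesh" is forbidden: '
              'User "system:node:localhost" cannot list resource "secrets"')
    print(node_authz_denials(sample))  # {('secrets', 'cilium-clustermesh')}
```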
Feb 12 20:20:38.197904 kubelet[1964]: I0212 20:20:38.197865 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-net\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198140 kubelet[1964]: I0212 20:20:38.198122 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cni-path\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198203 kubelet[1964]: I0212 20:20:38.198160 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-kernel\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198203 kubelet[1964]: I0212 20:20:38.198188 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-bpf-maps\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198373 kubelet[1964]: I0212 20:20:38.198214 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggcxq\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-kube-api-access-ggcxq\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198373 kubelet[1964]: I0212 20:20:38.198352 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzhmr\" (UniqueName: \"kubernetes.io/projected/c20c170c-72e6-47ca-b7de-228c52a91eb5-kube-api-access-gzhmr\") pod \"cilium-operator-574c4bb98d-mrv62\" (UID: \"c20c170c-72e6-47ca-b7de-228c52a91eb5\") " pod="kube-system/cilium-operator-574c4bb98d-mrv62" Feb 12 20:20:38.198420 kubelet[1964]: I0212 20:20:38.198374 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-lib-modules\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198420 kubelet[1964]: I0212 20:20:38.198389 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-xtables-lock\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198420 kubelet[1964]: I0212 20:20:38.198408 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198485 kubelet[1964]: I0212 20:20:38.198425 1964 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path\") pod \"cilium-operator-574c4bb98d-mrv62\" (UID: \"c20c170c-72e6-47ca-b7de-228c52a91eb5\") " pod="kube-system/cilium-operator-574c4bb98d-mrv62" Feb 12 20:20:38.198485 kubelet[1964]: I0212 20:20:38.198448 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-run\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198485 kubelet[1964]: I0212 20:20:38.198464 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-etc-cni-netd\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198485 kubelet[1964]: I0212 20:20:38.198485 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/846f3880-60fc-42fd-afc6-5f43a83ac338-clustermesh-secrets\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198573 kubelet[1964]: I0212 20:20:38.198502 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198573 kubelet[1964]: I0212 20:20:38.198522 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-hostproc\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.198573 kubelet[1964]: I0212 20:20:38.198537 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-cgroup\") pod \"cilium-98qjs\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " pod="kube-system/cilium-98qjs" Feb 12 20:20:38.399591 kubelet[1964]: E0212 20:20:38.399469 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:38.400280 env[1123]: time="2024-02-12T20:20:38.400236019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfjvb,Uid:f5bd346c-d2f2-4074-afce-f20cf27cd029,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:38.416366 env[1123]: time="2024-02-12T20:20:38.416297910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:38.416366 env[1123]: time="2024-02-12T20:20:38.416336641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:38.416366 env[1123]: time="2024-02-12T20:20:38.416346955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:38.416621 env[1123]: time="2024-02-12T20:20:38.416562367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09 pid=2060 runtime=io.containerd.runc.v2 Feb 12 20:20:38.430556 systemd[1]: Started cri-containerd-da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09.scope. Feb 12 20:20:38.449448 env[1123]: time="2024-02-12T20:20:38.449396587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zfjvb,Uid:f5bd346c-d2f2-4074-afce-f20cf27cd029,Namespace:kube-system,Attempt:0,} returns sandbox id \"da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09\"" Feb 12 20:20:38.450141 kubelet[1964]: E0212 20:20:38.450118 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:38.452376 env[1123]: time="2024-02-12T20:20:38.452347572Z" level=info msg="CreateContainer within sandbox \"da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:20:38.474858 env[1123]: time="2024-02-12T20:20:38.474807631Z" level=info msg="CreateContainer within sandbox \"da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2307e4e67fe65b6179053c18cdb18d815b24d037ce4397ef932a65826d68c9c6\"" Feb 12 20:20:38.476851 env[1123]: time="2024-02-12T20:20:38.476826525Z" level=info msg="StartContainer for \"2307e4e67fe65b6179053c18cdb18d815b24d037ce4397ef932a65826d68c9c6\"" Feb 12 20:20:38.490305 systemd[1]: Started cri-containerd-2307e4e67fe65b6179053c18cdb18d815b24d037ce4397ef932a65826d68c9c6.scope. Feb 12 20:20:38.518426 env[1123]: time="2024-02-12T20:20:38.518364132Z" level=info msg="StartContainer for \"2307e4e67fe65b6179053c18cdb18d815b24d037ce4397ef932a65826d68c9c6\" returns successfully" Feb 12 20:20:38.847825 kubelet[1964]: E0212 20:20:38.847571 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:38.854581 kubelet[1964]: I0212 20:20:38.854545 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zfjvb" podStartSLOduration=0.854502492 podCreationTimestamp="2024-02-12 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:20:38.854293264 +0000 UTC m=+16.125414995" watchObservedRunningTime="2024-02-12 20:20:38.854502492 +0000 UTC m=+16.125624222" Feb 12 20:20:39.134943 update_engine[1107]: I0212 20:20:39.134819 1107 update_attempter.cc:509] Updating boot flags... Feb 12 20:20:39.216065 systemd[1]: run-containerd-runc-k8s.io-da4f2c6a131e153d6e8361d74c8796a6119f54ef742640751778837068cb8c09-runc.s64K3I.mount: Deactivated successfully. 
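[editor's note] The `pod_startup_latency_tracker` entries report `podStartSLOduration` as observed running time minus pod creation time; the zeroed `firstStartedPulling`/`lastFinishedPulling` timestamps indicate no image pull was needed. The kube-proxy numbers above check out: 20:20:38.854502492 minus 20:20:38 is 0.854502492 s. A quick verification in Python (datetime only carries microseconds, so the nanosecond tail is truncated):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    """Parse journal timestamps like '2024-02-12 20:20:38.854502492 +0000 UTC'."""
    ts = ts.replace(" UTC", "")          # drop the trailing label
    base, _, rest = ts.partition(".")
    if rest:                             # '854502492 +0000' -> keep 6 fractional digits
        frac, _, tz = rest.partition(" ")
        return datetime.strptime(f"{base}.{frac[:6]} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")

if __name__ == "__main__":
    # Values copied from the kube-proxy entry above.
    created = parse("2024-02-12 20:20:38 +0000 UTC")
    running = parse("2024-02-12 20:20:38.854502492 +0000 UTC")
    print((running - created).total_seconds())  # ~0.854502, matching podStartSLOduration
```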
Feb 12 20:20:39.299634 kubelet[1964]: E0212 20:20:39.299608 1964 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:20:39.299909 kubelet[1964]: E0212 20:20:39.299680 1964 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path podName:846f3880-60fc-42fd-afc6-5f43a83ac338 nodeName:}" failed. No retries permitted until 2024-02-12 20:20:39.799659708 +0000 UTC m=+17.070781438 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path") pod "cilium-98qjs" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:20:39.299909 kubelet[1964]: E0212 20:20:39.299707 1964 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:20:39.299909 kubelet[1964]: E0212 20:20:39.299753 1964 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path podName:c20c170c-72e6-47ca-b7de-228c52a91eb5 nodeName:}" failed. No retries permitted until 2024-02-12 20:20:39.799740886 +0000 UTC m=+17.070862616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path") pod "cilium-operator-574c4bb98d-mrv62" (UID: "c20c170c-72e6-47ca-b7de-228c52a91eb5") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:20:39.299909 kubelet[1964]: E0212 20:20:39.299753 1964 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 12 20:20:39.299909 kubelet[1964]: E0212 20:20:39.299771 1964 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-98qjs: failed to sync secret cache: timed out waiting for the condition Feb 12 20:20:39.300157 kubelet[1964]: E0212 20:20:39.299837 1964 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls podName:846f3880-60fc-42fd-afc6-5f43a83ac338 nodeName:}" failed. No retries permitted until 2024-02-12 20:20:39.799823867 +0000 UTC m=+17.070945697 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls") pod "cilium-98qjs" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338") : failed to sync secret cache: timed out waiting for the condition Feb 12 20:20:39.907675 kubelet[1964]: E0212 20:20:39.907646 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:39.908011 env[1123]: time="2024-02-12T20:20:39.907965155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98qjs,Uid:846f3880-60fc-42fd-afc6-5f43a83ac338,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:39.993348 kubelet[1964]: E0212 20:20:39.993306 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:39.993811 env[1123]: time="2024-02-12T20:20:39.993776429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mrv62,Uid:c20c170c-72e6-47ca-b7de-228c52a91eb5,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:40.193113 env[1123]: time="2024-02-12T20:20:40.192963155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:40.193113 env[1123]: time="2024-02-12T20:20:40.193047829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:40.193113 env[1123]: time="2024-02-12T20:20:40.193058544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:40.193676 env[1123]: time="2024-02-12T20:20:40.193628496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b pid=2277 runtime=io.containerd.runc.v2 Feb 12 20:20:40.196390 env[1123]: time="2024-02-12T20:20:40.194529627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:20:40.196390 env[1123]: time="2024-02-12T20:20:40.194581937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:20:40.196390 env[1123]: time="2024-02-12T20:20:40.194591639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:20:40.196390 env[1123]: time="2024-02-12T20:20:40.194778547Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2 pid=2284 runtime=io.containerd.runc.v2 Feb 12 20:20:40.205085 systemd[1]: Started cri-containerd-856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2.scope. Feb 12 20:20:40.208645 systemd[1]: Started cri-containerd-50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b.scope. 
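[editor's note] The `nestedpendingoperations` errors above show the volume manager's retry policy: a failed `MountVolume.SetUp` is retried after 500 ms (`durationBeforeRetry 500ms`), and on repeated failure the wait is understood to grow exponentially up to a cap. A hedged sketch of that schedule; only the 500 ms initial delay is visible in this log, while the doubling factor and the roughly two-minute cap are assumptions about the kubelet's defaults.

```python
INITIAL_DELAY_S = 0.5   # matches "durationBeforeRetry 500ms" above
BACKOFF_FACTOR = 2.0    # assumed exponential factor
MAX_DELAY_S = 122.0     # assumed cap (~2 minutes)

def retry_delays(failures: int) -> list:
    """Delay before each retry after `failures` consecutive errors."""
    delays, delay = [], INITIAL_DELAY_S
    for _ in range(failures):
        delays.append(delay)
        delay = min(delay * BACKOFF_FACTOR, MAX_DELAY_S)
    return delays

if __name__ == "__main__":
    print(retry_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```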
Feb 12 20:20:40.233491 env[1123]: time="2024-02-12T20:20:40.233435500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98qjs,Uid:846f3880-60fc-42fd-afc6-5f43a83ac338,Namespace:kube-system,Attempt:0,} returns sandbox id \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\"" Feb 12 20:20:40.234017 kubelet[1964]: E0212 20:20:40.233980 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:40.237088 env[1123]: time="2024-02-12T20:20:40.237054005Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 12 20:20:40.251317 env[1123]: time="2024-02-12T20:20:40.251282048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-mrv62,Uid:c20c170c-72e6-47ca-b7de-228c52a91eb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\"" Feb 12 20:20:40.252125 kubelet[1964]: E0212 20:20:40.252104 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:41.951401 systemd[1]: Started sshd@5-10.0.0.33:22-10.0.0.1:59026.service. Feb 12 20:20:41.988632 sshd[2347]: Accepted publickey for core from 10.0.0.1 port 59026 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:20:41.989835 sshd[2347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:20:41.993956 systemd-logind[1104]: New session 6 of user core. Feb 12 20:20:41.995229 systemd[1]: Started session-6.scope. Feb 12 20:20:42.112913 sshd[2347]: pam_unix(sshd:session): session closed for user core Feb 12 20:20:42.115620 systemd[1]: sshd@5-10.0.0.33:22-10.0.0.1:59026.service: Deactivated successfully. Feb 12 20:20:42.116496 systemd[1]: session-6.scope: Deactivated successfully. Feb 12 20:20:42.117201 systemd-logind[1104]: Session 6 logged out. Waiting for processes to exit. Feb 12 20:20:42.118006 systemd-logind[1104]: Removed session 6. Feb 12 20:20:47.119362 systemd[1]: Started sshd@6-10.0.0.33:22-10.0.0.1:35022.service. Feb 12 20:20:47.151586 sshd[2364]: Accepted publickey for core from 10.0.0.1 port 35022 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:20:47.152063 sshd[2364]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:20:47.155827 systemd-logind[1104]: New session 7 of user core. Feb 12 20:20:47.156878 systemd[1]: Started session-7.scope. Feb 12 20:20:47.289709 sshd[2364]: pam_unix(sshd:session): session closed for user core Feb 12 20:20:47.291958 systemd[1]: sshd@6-10.0.0.33:22-10.0.0.1:35022.service: Deactivated successfully. Feb 12 20:20:47.292791 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:20:47.293365 systemd-logind[1104]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:20:47.294030 systemd-logind[1104]: Removed session 7. Feb 12 20:20:48.401404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080581292.mount: Deactivated successfully. Feb 12 20:20:52.296426 systemd[1]: Started sshd@7-10.0.0.33:22-10.0.0.1:35034.service. 
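[editor's note] Interleaved with the CNI setup, sshd and systemd-logind record complete session lifecycles; sessions 6 and 7 above each last about a tenth of a second, which suggests scripted commands rather than interactive logins. A small parser that pairs `New session` / `Removed session` entries and reports durations; it assumes the `MMM DD HH:MM:SS.ffffff` journal prefix used throughout this dump.

```python
import re
from datetime import datetime

SESSION_RE = re.compile(
    r"(?P<ts>\w{3} \d+ [\d:.]+) systemd-logind\[\d+\]: "
    r"(?P<event>New|Removed) session (?P<id>\d+)"
)

def session_durations(log_text: str) -> dict:
    """Seconds between 'New session N' and 'Removed session N'."""
    opened, durations = {}, {}
    for m in SESSION_RE.finditer(log_text):
        # The journal prefix has no year; the default (1900) is fine for deltas.
        ts = datetime.strptime(m.group("ts"), "%b %d %H:%M:%S.%f")
        if m.group("event") == "New":
            opened[m.group("id")] = ts
        elif m.group("id") in opened:
            durations[m.group("id")] = (ts - opened.pop(m.group("id"))).total_seconds()
    return durations

if __name__ == "__main__":
    sample = ("Feb 12 20:20:41.993956 systemd-logind[1104]: New session 6 of user core.\n"
              "Feb 12 20:20:42.118006 systemd-logind[1104]: Removed session 6.")
    print(session_durations(sample))  # {'6': 0.12405}
```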
Feb 12 20:20:52.587615 sshd[2378]: Accepted publickey for core from 10.0.0.1 port 35034 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:20:52.588860 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:20:52.612880 systemd-logind[1104]: New session 8 of user core. Feb 12 20:20:52.613734 systemd[1]: Started session-8.scope. Feb 12 20:20:52.681336 env[1123]: time="2024-02-12T20:20:52.681281431Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:52.683842 env[1123]: time="2024-02-12T20:20:52.683790935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:52.685814 env[1123]: time="2024-02-12T20:20:52.685776348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:52.686553 env[1123]: time="2024-02-12T20:20:52.686508790Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 12 20:20:52.688097 env[1123]: time="2024-02-12T20:20:52.688010108Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 12 20:20:52.689134 env[1123]: time="2024-02-12T20:20:52.689097732Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:20:52.702228 env[1123]: time="2024-02-12T20:20:52.702174049Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\"" Feb 12 20:20:52.704456 env[1123]: time="2024-02-12T20:20:52.702887380Z" level=info msg="StartContainer for \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\"" Feb 12 20:20:52.718477 systemd[1]: Started cri-containerd-c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1.scope. Feb 12 20:20:52.745175 env[1123]: time="2024-02-12T20:20:52.745131737Z" level=info msg="StartContainer for \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\" returns successfully" Feb 12 20:20:52.752388 systemd[1]: cri-containerd-c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1.scope: Deactivated successfully. Feb 12 20:20:52.759172 sshd[2378]: pam_unix(sshd:session): session closed for user core Feb 12 20:20:52.762297 systemd[1]: sshd@7-10.0.0.33:22-10.0.0.1:35034.service: Deactivated successfully. Feb 12 20:20:52.763151 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:20:52.764146 systemd-logind[1104]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:20:52.765027 systemd-logind[1104]: Removed session 8. 
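[editor's note] The cilium pull above is pinned by digest (`quay.io/cilium/cilium:v1.12.5@sha256:06ce...`), and the `returns image reference` line reports the resolved image ID (`sha256:3e35...`), the same ID the surrounding `ImageCreate`/`ImageUpdate` events key on. A minimal splitter for such references; it covers only the `repo:tag@digest` shape seen in this log, not every OCI reference form.

```python
from typing import NamedTuple, Optional

class ImageRef(NamedTuple):
    repository: str
    tag: Optional[str]
    digest: Optional[str]

def parse_image_ref(ref: str) -> ImageRef:
    """Split 'repo:tag@sha256:...' style references as printed in the log."""
    ref, sep, digest = ref.partition("@")
    digest = digest if sep else None
    repo, sep, tag = ref.rpartition(":")
    if not sep or "/" in tag:  # no tag, or the ':' belonged to a registry port
        repo, tag = ref, None
    return ImageRef(repo, tag, digest)

if __name__ == "__main__":
    r = parse_image_ref("quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a75"
                        "04ee5c5d8b2e2d7b72ef728ad94e564740dd505be5")
    print(r.repository, r.tag, r.digest[:16])
```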
Feb 12 20:20:52.980496 kubelet[1964]: E0212 20:20:52.872022 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:53.409344 env[1123]: time="2024-02-12T20:20:53.409281785Z" level=info msg="shim disconnected" id=c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1 Feb 12 20:20:53.409344 env[1123]: time="2024-02-12T20:20:53.409331100Z" level=warning msg="cleaning up after shim disconnected" id=c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1 namespace=k8s.io Feb 12 20:20:53.409344 env[1123]: time="2024-02-12T20:20:53.409339858Z" level=info msg="cleaning up dead shim" Feb 12 20:20:53.416307 env[1123]: time="2024-02-12T20:20:53.416259186Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2443 runtime=io.containerd.runc.v2\n" Feb 12 20:20:53.698789 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1-rootfs.mount: Deactivated successfully. Feb 12 20:20:53.875897 kubelet[1964]: E0212 20:20:53.875025 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:53.878777 env[1123]: time="2024-02-12T20:20:53.878735176Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 12 20:20:53.894407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2878297565.mount: Deactivated successfully. Feb 12 20:20:53.894957 env[1123]: time="2024-02-12T20:20:53.894919859Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\"" Feb 12 20:20:53.895534 env[1123]: time="2024-02-12T20:20:53.895475701Z" level=info msg="StartContainer for \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\"" Feb 12 20:20:53.910244 systemd[1]: Started cri-containerd-0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487.scope. Feb 12 20:20:53.930823 env[1123]: time="2024-02-12T20:20:53.930754551Z" level=info msg="StartContainer for \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\" returns successfully" Feb 12 20:20:53.939723 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:20:53.939978 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:20:53.940154 systemd[1]: Stopping systemd-sysctl.service... Feb 12 20:20:53.941967 systemd[1]: Starting systemd-sysctl.service... Feb 12 20:20:53.942203 systemd[1]: cri-containerd-0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487.scope: Deactivated successfully. Feb 12 20:20:53.953382 systemd[1]: Finished systemd-sysctl.service. 
Feb 12 20:20:53.968711 env[1123]: time="2024-02-12T20:20:53.968649787Z" level=info msg="shim disconnected" id=0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487 Feb 12 20:20:53.968711 env[1123]: time="2024-02-12T20:20:53.968703551Z" level=warning msg="cleaning up after shim disconnected" id=0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487 namespace=k8s.io Feb 12 20:20:53.968711 env[1123]: time="2024-02-12T20:20:53.968714845Z" level=info msg="cleaning up dead shim" Feb 12 20:20:53.979247 env[1123]: time="2024-02-12T20:20:53.979177684Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2507 runtime=io.containerd.runc.v2\n" Feb 12 20:20:54.699100 systemd[1]: run-containerd-runc-k8s.io-0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487-runc.p1Vn1U.mount: Deactivated successfully. Feb 12 20:20:54.699211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487-rootfs.mount: Deactivated successfully. Feb 12 20:20:54.876754 kubelet[1964]: E0212 20:20:54.876727 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:54.879686 env[1123]: time="2024-02-12T20:20:54.879460843Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 12 20:20:54.892867 env[1123]: time="2024-02-12T20:20:54.892796857Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\"" Feb 12 20:20:54.893371 env[1123]: time="2024-02-12T20:20:54.893345288Z" level=info msg="StartContainer for \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\"" Feb 12 20:20:54.917219 systemd[1]: Started cri-containerd-bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01.scope. Feb 12 20:20:54.942091 systemd[1]: cri-containerd-bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01.scope: Deactivated successfully. 
Feb 12 20:20:54.943981 env[1123]: time="2024-02-12T20:20:54.943943548Z" level=info msg="StartContainer for \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\" returns successfully" Feb 12 20:20:55.135405 env[1123]: time="2024-02-12T20:20:55.135337087Z" level=info msg="shim disconnected" id=bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01 Feb 12 20:20:55.135405 env[1123]: time="2024-02-12T20:20:55.135389386Z" level=warning msg="cleaning up after shim disconnected" id=bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01 namespace=k8s.io Feb 12 20:20:55.135405 env[1123]: time="2024-02-12T20:20:55.135404629Z" level=info msg="cleaning up dead shim" Feb 12 20:20:55.141404 env[1123]: time="2024-02-12T20:20:55.141352400Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2565 runtime=io.containerd.runc.v2\n" Feb 12 20:20:55.151682 env[1123]: time="2024-02-12T20:20:55.151647066Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:55.153329 env[1123]: time="2024-02-12T20:20:55.153296456Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:55.154630 env[1123]: time="2024-02-12T20:20:55.154604362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:20:55.154980 env[1123]: time="2024-02-12T20:20:55.154957341Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 12 20:20:55.156587 env[1123]: time="2024-02-12T20:20:55.156549280Z" level=info msg="CreateContainer within sandbox \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 12 20:20:55.166490 env[1123]: time="2024-02-12T20:20:55.166447336Z" level=info msg="CreateContainer within sandbox \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\"" Feb 12 20:20:55.166882 env[1123]: time="2024-02-12T20:20:55.166807870Z" level=info msg="StartContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\"" Feb 12 20:20:55.179540 systemd[1]: Started cri-containerd-dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da.scope. Feb 12 20:20:55.201027 env[1123]: time="2024-02-12T20:20:55.200223252Z" level=info msg="StartContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" returns successfully" Feb 12 20:20:55.700090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01-rootfs.mount: Deactivated successfully. 
Feb 12 20:20:55.879199 kubelet[1964]: E0212 20:20:55.879163 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:55.881148 kubelet[1964]: E0212 20:20:55.881121 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:55.882697 env[1123]: time="2024-02-12T20:20:55.882659131Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 12 20:20:56.181937 kubelet[1964]: I0212 20:20:56.181898 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-mrv62" podStartSLOduration=3.280137011 podCreationTimestamp="2024-02-12 20:20:38 +0000 UTC" firstStartedPulling="2024-02-12 20:20:40.253476016 +0000 UTC m=+17.524597746" lastFinishedPulling="2024-02-12 20:20:55.155196611 +0000 UTC m=+32.426318341" observedRunningTime="2024-02-12 20:20:56.181585588 +0000 UTC m=+33.452707318" watchObservedRunningTime="2024-02-12 20:20:56.181857606 +0000 UTC m=+33.452979336" Feb 12 20:20:56.194691 env[1123]: time="2024-02-12T20:20:56.194631001Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\"" Feb 12 20:20:56.195209 env[1123]: time="2024-02-12T20:20:56.195181358Z" level=info msg="StartContainer for \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\"" Feb 12 20:20:56.212496 systemd[1]: Started cri-containerd-57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229.scope. Feb 12 20:20:56.230648 systemd[1]: cri-containerd-57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229.scope: Deactivated successfully. 
Feb 12 20:20:56.232524 env[1123]: time="2024-02-12T20:20:56.232443489Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod846f3880_60fc_42fd_afc6_5f43a83ac338.slice/cri-containerd-57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229.scope/memory.events\": no such file or directory" Feb 12 20:20:56.234878 env[1123]: time="2024-02-12T20:20:56.234848082Z" level=info msg="StartContainer for \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\" returns successfully" Feb 12 20:20:56.258002 env[1123]: time="2024-02-12T20:20:56.257935247Z" level=info msg="shim disconnected" id=57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229 Feb 12 20:20:56.258002 env[1123]: time="2024-02-12T20:20:56.257980901Z" level=warning msg="cleaning up after shim disconnected" id=57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229 namespace=k8s.io Feb 12 20:20:56.258229 env[1123]: time="2024-02-12T20:20:56.258014491Z" level=info msg="cleaning up dead shim" Feb 12 20:20:56.264608 env[1123]: time="2024-02-12T20:20:56.264558462Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:20:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2661 runtime=io.containerd.runc.v2\n" Feb 12 20:20:56.699445 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229-rootfs.mount: Deactivated successfully. Feb 12 20:20:56.885248 kubelet[1964]: E0212 20:20:56.885216 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:56.885700 kubelet[1964]: E0212 20:20:56.885216 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:56.886967 env[1123]: time="2024-02-12T20:20:56.886927506Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 12 20:20:56.901335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2283811543.mount: Deactivated successfully. Feb 12 20:20:56.904813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675687554.mount: Deactivated successfully. Feb 12 20:20:56.909289 env[1123]: time="2024-02-12T20:20:56.909245166Z" level=info msg="CreateContainer within sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\"" Feb 12 20:20:56.909800 env[1123]: time="2024-02-12T20:20:56.909755870Z" level=info msg="StartContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\"" Feb 12 20:20:56.922138 systemd[1]: Started cri-containerd-ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26.scope. 
Feb 12 20:20:56.944013 env[1123]: time="2024-02-12T20:20:56.943933362Z" level=info msg="StartContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" returns successfully" Feb 12 20:20:57.051075 kubelet[1964]: I0212 20:20:57.050968 1964 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:20:57.067539 kubelet[1964]: I0212 20:20:57.067496 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:57.070261 kubelet[1964]: I0212 20:20:57.069961 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:20:57.073966 systemd[1]: Created slice kubepods-burstable-pod68ce24c3_cfa4_44bd_80a7_9190035358b1.slice. Feb 12 20:20:57.079331 systemd[1]: Created slice kubepods-burstable-podd415cf4e_cb56_44f1_9017_08aabb67a1a1.slice. Feb 12 20:20:57.219847 kubelet[1964]: I0212 20:20:57.219809 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sxfd\" (UniqueName: \"kubernetes.io/projected/d415cf4e-cb56-44f1-9017-08aabb67a1a1-kube-api-access-2sxfd\") pod \"coredns-5d78c9869d-2gxgt\" (UID: \"d415cf4e-cb56-44f1-9017-08aabb67a1a1\") " pod="kube-system/coredns-5d78c9869d-2gxgt" Feb 12 20:20:57.219995 kubelet[1964]: I0212 20:20:57.219868 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk5b4\" (UniqueName: \"kubernetes.io/projected/68ce24c3-cfa4-44bd-80a7-9190035358b1-kube-api-access-jk5b4\") pod \"coredns-5d78c9869d-vhhgb\" (UID: \"68ce24c3-cfa4-44bd-80a7-9190035358b1\") " pod="kube-system/coredns-5d78c9869d-vhhgb" Feb 12 20:20:57.219995 kubelet[1964]: I0212 20:20:57.219891 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68ce24c3-cfa4-44bd-80a7-9190035358b1-config-volume\") pod \"coredns-5d78c9869d-vhhgb\" (UID: \"68ce24c3-cfa4-44bd-80a7-9190035358b1\") " pod="kube-system/coredns-5d78c9869d-vhhgb" Feb 12 20:20:57.219995 kubelet[1964]: I0212 20:20:57.219923 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d415cf4e-cb56-44f1-9017-08aabb67a1a1-config-volume\") pod \"coredns-5d78c9869d-2gxgt\" (UID: \"d415cf4e-cb56-44f1-9017-08aabb67a1a1\") " pod="kube-system/coredns-5d78c9869d-2gxgt" Feb 12 20:20:57.377159 kubelet[1964]: E0212 20:20:57.377108 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:57.377674 env[1123]: time="2024-02-12T20:20:57.377628250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-vhhgb,Uid:68ce24c3-cfa4-44bd-80a7-9190035358b1,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:57.383301 kubelet[1964]: E0212 20:20:57.383250 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:57.383819 env[1123]: time="2024-02-12T20:20:57.383765961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2gxgt,Uid:d415cf4e-cb56-44f1-9017-08aabb67a1a1,Namespace:kube-system,Attempt:0,}" Feb 12 20:20:57.764851 systemd[1]: Started sshd@8-10.0.0.33:22-10.0.0.1:48450.service. 
Feb 12 20:20:57.797172 sshd[2848]: Accepted publickey for core from 10.0.0.1 port 48450 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:20:57.798239 sshd[2848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:20:57.801241 systemd-logind[1104]: New session 9 of user core. Feb 12 20:20:57.802102 systemd[1]: Started session-9.scope. Feb 12 20:20:57.888665 kubelet[1964]: E0212 20:20:57.888634 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:57.900026 kubelet[1964]: I0212 20:20:57.899967 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-98qjs" podStartSLOduration=7.448744713 podCreationTimestamp="2024-02-12 20:20:38 +0000 UTC" firstStartedPulling="2024-02-12 20:20:40.236365952 +0000 UTC m=+17.507487692" lastFinishedPulling="2024-02-12 20:20:52.687554844 +0000 UTC m=+29.958676574" observedRunningTime="2024-02-12 20:20:57.899789275 +0000 UTC m=+35.170911005" watchObservedRunningTime="2024-02-12 20:20:57.899933595 +0000 UTC m=+35.171055325" Feb 12 20:20:57.909625 sshd[2848]: pam_unix(sshd:session): session closed for user core Feb 12 20:20:57.911848 systemd[1]: sshd@8-10.0.0.33:22-10.0.0.1:48450.service: Deactivated successfully. Feb 12 20:20:57.912507 systemd[1]: session-9.scope: Deactivated successfully. Feb 12 20:20:57.913196 systemd-logind[1104]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:20:57.913769 systemd-logind[1104]: Removed session 9. Feb 12 20:20:58.859688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 12 20:20:58.859810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 12 20:20:58.859493 systemd-networkd[1017]: cilium_host: Link UP Feb 12 20:20:58.859656 systemd-networkd[1017]: cilium_net: Link UP Feb 12 20:20:58.859805 systemd-networkd[1017]: cilium_net: Gained carrier Feb 12 20:20:58.859939 systemd-networkd[1017]: cilium_host: Gained carrier Feb 12 20:20:58.892831 kubelet[1964]: E0212 20:20:58.892582 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:20:58.925153 systemd-networkd[1017]: cilium_vxlan: Link UP Feb 12 20:20:58.925162 systemd-networkd[1017]: cilium_vxlan: Gained carrier Feb 12 20:20:59.086100 systemd-networkd[1017]: cilium_host: Gained IPv6LL Feb 12 20:20:59.100033 kernel: NET: Registered PF_ALG protocol family Feb 12 20:20:59.102148 systemd-networkd[1017]: cilium_net: Gained IPv6LL Feb 12 20:20:59.583478 systemd-networkd[1017]: lxc_health: Link UP Feb 12 20:20:59.592361 systemd-networkd[1017]: lxc_health: Gained carrier Feb 12 20:20:59.593008 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 12 20:20:59.895703 kubelet[1964]: E0212 20:20:59.895574 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:00.006694 systemd-networkd[1017]: lxc497f62396c7a: Link UP Feb 12 20:21:00.017084 kernel: eth0: renamed from tmp0716f Feb 12 20:21:00.022103 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:21:00.022182 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc497f62396c7a: link becomes ready Feb 12 20:21:00.022337 systemd-networkd[1017]: lxc497f62396c7a: Gained carrier
Feb 12 20:21:00.023719 systemd-networkd[1017]: lxc3fa540d3ee9f: Link UP Feb 12 20:21:00.029014 kernel: eth0: renamed from tmp1952b Feb 12 20:21:00.044395 systemd-networkd[1017]: lxc3fa540d3ee9f: Gained carrier Feb 12 20:21:00.046009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3fa540d3ee9f: link becomes ready Feb 12 20:21:00.896944 kubelet[1964]: E0212 20:21:00.896901 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:00.899135 systemd-networkd[1017]: cilium_vxlan: Gained IPv6LL Feb 12 20:21:00.899415 systemd-networkd[1017]: lxc_health: Gained IPv6LL Feb 12 20:21:01.730447 systemd-networkd[1017]: lxc3fa540d3ee9f: Gained IPv6LL Feb 12 20:21:01.919279 systemd-networkd[1017]: lxc497f62396c7a: Gained IPv6LL Feb 12 20:21:02.936973 systemd[1]: Started sshd@9-10.0.0.33:22-10.0.0.1:48456.service. Feb 12 20:21:03.134444 sshd[3233]: Accepted publickey for core from 10.0.0.1 port 48456 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:03.135676 sshd[3233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:03.182725 systemd-logind[1104]: New session 10 of user core. Feb 12 20:21:03.184102 systemd[1]: Started session-10.scope. Feb 12 20:21:03.630365 sshd[3233]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:03.639910 systemd[1]: sshd@9-10.0.0.33:22-10.0.0.1:48456.service: Deactivated successfully. Feb 12 20:21:03.640728 systemd[1]: session-10.scope: Deactivated successfully. Feb 12 20:21:03.650892 systemd[1]: Started sshd@10-10.0.0.33:22-10.0.0.1:48466.service. Feb 12 20:21:03.655412 systemd-logind[1104]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:21:03.662647 systemd-logind[1104]: Removed session 10. Feb 12 20:21:03.718472 sshd[3262]: Accepted publickey for core from 10.0.0.1 port 48466 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:03.720041 sshd[3262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:03.741418 systemd[1]: Started session-11.scope. Feb 12 20:21:03.742698 systemd-logind[1104]: New session 11 of user core. Feb 12 20:21:04.585088 sshd[3262]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:04.585798 systemd[1]: Started sshd@11-10.0.0.33:22-10.0.0.1:45086.service. Feb 12 20:21:04.591148 systemd[1]: sshd@10-10.0.0.33:22-10.0.0.1:48466.service: Deactivated successfully. Feb 12 20:21:04.592161 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:21:04.597694 systemd-logind[1104]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:21:04.599134 systemd-logind[1104]: Removed session 11. Feb 12 20:21:04.631427 sshd[3272]: Accepted publickey for core from 10.0.0.1 port 45086 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:04.632853 sshd[3272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:04.636940 systemd-logind[1104]: New session 12 of user core. Feb 12 20:21:04.638093 systemd[1]: Started session-12.scope. Feb 12 20:21:04.815655 sshd[3272]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:04.847735 systemd[1]: sshd@11-10.0.0.33:22-10.0.0.1:45086.service: Deactivated successfully. Feb 12 20:21:04.848452 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:21:04.849321 systemd-logind[1104]: Session 12 logged out. Waiting for processes to exit.
Feb 12 20:21:04.850038 systemd-logind[1104]: Removed session 12. Feb 12 20:21:04.980541 env[1123]: time="2024-02-12T20:21:04.980479638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:04.980874 env[1123]: time="2024-02-12T20:21:04.980543228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:04.980874 env[1123]: time="2024-02-12T20:21:04.980554651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:04.980874 env[1123]: time="2024-02-12T20:21:04.980716561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0716f98e78f706b72a8284ae1f0fa8cc40a67eddfe7c8c8ef4cc0321b3378e0f pid=3304 runtime=io.containerd.runc.v2 Feb 12 20:21:04.987095 env[1123]: time="2024-02-12T20:21:04.982281624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:04.987095 env[1123]: time="2024-02-12T20:21:04.982335774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:04.987095 env[1123]: time="2024-02-12T20:21:04.982345664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:04.987095 env[1123]: time="2024-02-12T20:21:04.982541964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027 pid=3310 runtime=io.containerd.runc.v2 Feb 12 20:21:04.998222 systemd[1]: Started cri-containerd-1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027.scope. Feb 12 20:21:05.000740 systemd[1]: Started cri-containerd-0716f98e78f706b72a8284ae1f0fa8cc40a67eddfe7c8c8ef4cc0321b3378e0f.scope. 
Feb 12 20:21:05.010890 systemd-resolved[1061]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:21:05.013407 systemd-resolved[1061]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 12 20:21:05.035420 env[1123]: time="2024-02-12T20:21:05.035370587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-vhhgb,Uid:68ce24c3-cfa4-44bd-80a7-9190035358b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"0716f98e78f706b72a8284ae1f0fa8cc40a67eddfe7c8c8ef4cc0321b3378e0f\"" Feb 12 20:21:05.036313 kubelet[1964]: E0212 20:21:05.036288 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.038171 env[1123]: time="2024-02-12T20:21:05.038119873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2gxgt,Uid:d415cf4e-cb56-44f1-9017-08aabb67a1a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027\"" Feb 12 20:21:05.039734 env[1123]: time="2024-02-12T20:21:05.039700991Z" level=info msg="CreateContainer within sandbox \"0716f98e78f706b72a8284ae1f0fa8cc40a67eddfe7c8c8ef4cc0321b3378e0f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:21:05.041831 kubelet[1964]: E0212 20:21:05.041774 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.044029 env[1123]: time="2024-02-12T20:21:05.043893674Z" level=info msg="CreateContainer within sandbox \"1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:21:05.060380 env[1123]: time="2024-02-12T20:21:05.060307740Z" level=info msg="CreateContainer within sandbox \"1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01658dc7a814ffbb0ca4a2717e9018145ff986bce5e96b3fca78d24b54ecf358\"" Feb 12 20:21:05.060809 env[1123]: time="2024-02-12T20:21:05.060775031Z" level=info msg="StartContainer for \"01658dc7a814ffbb0ca4a2717e9018145ff986bce5e96b3fca78d24b54ecf358\"" Feb 12 20:21:05.061310 env[1123]: time="2024-02-12T20:21:05.061270860Z" level=info msg="CreateContainer within sandbox \"0716f98e78f706b72a8284ae1f0fa8cc40a67eddfe7c8c8ef4cc0321b3378e0f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"683c2c343ad009c9a791f14fa8dcaaf1090266196e125fe7a1b53a6bc580d9f4\"" Feb 12 20:21:05.061688 env[1123]: time="2024-02-12T20:21:05.061659020Z" level=info msg="StartContainer for \"683c2c343ad009c9a791f14fa8dcaaf1090266196e125fe7a1b53a6bc580d9f4\"" Feb 12 20:21:05.075816 systemd[1]: Started cri-containerd-01658dc7a814ffbb0ca4a2717e9018145ff986bce5e96b3fca78d24b54ecf358.scope. Feb 12 20:21:05.079416 systemd[1]: Started cri-containerd-683c2c343ad009c9a791f14fa8dcaaf1090266196e125fe7a1b53a6bc580d9f4.scope. 
Feb 12 20:21:05.103422 env[1123]: time="2024-02-12T20:21:05.103300358Z" level=info msg="StartContainer for \"683c2c343ad009c9a791f14fa8dcaaf1090266196e125fe7a1b53a6bc580d9f4\" returns successfully" Feb 12 20:21:05.104220 env[1123]: time="2024-02-12T20:21:05.104179758Z" level=info msg="StartContainer for \"01658dc7a814ffbb0ca4a2717e9018145ff986bce5e96b3fca78d24b54ecf358\" returns successfully" Feb 12 20:21:05.770624 kubelet[1964]: I0212 20:21:05.770578 1964 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 20:21:05.771321 kubelet[1964]: E0212 20:21:05.771285 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.912556 kubelet[1964]: E0212 20:21:05.912520 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.914699 kubelet[1964]: E0212 20:21:05.914669 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.914862 kubelet[1964]: E0212 20:21:05.914761 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:05.921979 kubelet[1964]: I0212 20:21:05.921950 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2gxgt" podStartSLOduration=27.921917254 podCreationTimestamp="2024-02-12 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:21:05.921597293 +0000 UTC m=+43.192719023" watchObservedRunningTime="2024-02-12 20:21:05.921917254 +0000 UTC m=+43.193038974" Feb 12 20:21:05.931208 kubelet[1964]: I0212 20:21:05.931172 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-vhhgb" podStartSLOduration=27.931129402 podCreationTimestamp="2024-02-12 20:20:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:21:05.930643884 +0000 UTC m=+43.201765614" watchObservedRunningTime="2024-02-12 20:21:05.931129402 +0000 UTC m=+43.202251132" Feb 12 20:21:05.984168 systemd[1]: run-containerd-runc-k8s.io-1952babb0c3396cc48d1c859de6287fa183b7e932f574185b99d1efc823ca027-runc.yuor3Z.mount: Deactivated successfully. 
Feb 12 20:21:06.915864 kubelet[1964]: E0212 20:21:06.915833 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:07.378500 kubelet[1964]: E0212 20:21:07.378474 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:07.917539 kubelet[1964]: E0212 20:21:07.917505 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:07.917971 kubelet[1964]: E0212 20:21:07.917555 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:09.819672 systemd[1]: Started sshd@12-10.0.0.33:22-10.0.0.1:45100.service. Feb 12 20:21:09.850206 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 45100 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:09.851129 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:09.854133 systemd-logind[1104]: New session 13 of user core. Feb 12 20:21:09.854815 systemd[1]: Started session-13.scope. Feb 12 20:21:09.954092 sshd[3462]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:09.956034 systemd[1]: sshd@12-10.0.0.33:22-10.0.0.1:45100.service: Deactivated successfully. Feb 12 20:21:09.956784 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:21:09.957383 systemd-logind[1104]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:21:09.957971 systemd-logind[1104]: Removed session 13. Feb 12 20:21:14.958800 systemd[1]: Started sshd@13-10.0.0.33:22-10.0.0.1:53888.service. Feb 12 20:21:14.988793 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 53888 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:14.990029 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:14.993178 systemd-logind[1104]: New session 14 of user core. Feb 12 20:21:14.994145 systemd[1]: Started session-14.scope. Feb 12 20:21:15.098216 sshd[3476]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:15.100954 systemd[1]: sshd@13-10.0.0.33:22-10.0.0.1:53888.service: Deactivated successfully. Feb 12 20:21:15.101444 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:21:15.101930 systemd-logind[1104]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:21:15.102857 systemd[1]: Started sshd@14-10.0.0.33:22-10.0.0.1:53896.service. Feb 12 20:21:15.106549 systemd-logind[1104]: Removed session 14. Feb 12 20:21:15.132340 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 53896 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:15.133299 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:15.136334 systemd-logind[1104]: New session 15 of user core. Feb 12 20:21:15.137385 systemd[1]: Started session-15.scope. Feb 12 20:21:15.307958 sshd[3490]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:15.310775 systemd[1]: sshd@14-10.0.0.33:22-10.0.0.1:53896.service: Deactivated successfully. Feb 12 20:21:15.311304 systemd[1]: session-15.scope: Deactivated successfully. 
Feb 12 20:21:15.311904 systemd-logind[1104]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:21:15.312900 systemd[1]: Started sshd@15-10.0.0.33:22-10.0.0.1:53904.service. Feb 12 20:21:15.313664 systemd-logind[1104]: Removed session 15. Feb 12 20:21:15.343748 sshd[3501]: Accepted publickey for core from 10.0.0.1 port 53904 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:15.344725 sshd[3501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:15.347604 systemd-logind[1104]: New session 16 of user core. Feb 12 20:21:15.348400 systemd[1]: Started session-16.scope. Feb 12 20:21:16.128761 sshd[3501]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:16.131850 systemd[1]: sshd@15-10.0.0.33:22-10.0.0.1:53904.service: Deactivated successfully. Feb 12 20:21:16.132498 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:21:16.133265 systemd-logind[1104]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:21:16.134540 systemd[1]: Started sshd@16-10.0.0.33:22-10.0.0.1:53914.service. Feb 12 20:21:16.135928 systemd-logind[1104]: Removed session 16. Feb 12 20:21:16.164277 sshd[3519]: Accepted publickey for core from 10.0.0.1 port 53914 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:16.165462 sshd[3519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:16.168668 systemd-logind[1104]: New session 17 of user core. Feb 12 20:21:16.169485 systemd[1]: Started session-17.scope. Feb 12 20:21:16.661081 sshd[3519]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:16.663970 systemd[1]: sshd@16-10.0.0.33:22-10.0.0.1:53914.service: Deactivated successfully. Feb 12 20:21:16.664605 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:21:16.665257 systemd-logind[1104]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:21:16.666384 systemd[1]: Started sshd@17-10.0.0.33:22-10.0.0.1:53916.service. Feb 12 20:21:16.667023 systemd-logind[1104]: Removed session 17. Feb 12 20:21:16.695934 sshd[3531]: Accepted publickey for core from 10.0.0.1 port 53916 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:16.697219 sshd[3531]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:16.700662 systemd-logind[1104]: New session 18 of user core. Feb 12 20:21:16.701548 systemd[1]: Started session-18.scope. Feb 12 20:21:16.803065 sshd[3531]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:16.805027 systemd[1]: sshd@17-10.0.0.33:22-10.0.0.1:53916.service: Deactivated successfully. Feb 12 20:21:16.805787 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:21:16.806671 systemd-logind[1104]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:21:16.807339 systemd-logind[1104]: Removed session 18. Feb 12 20:21:21.806977 systemd[1]: Started sshd@18-10.0.0.33:22-10.0.0.1:53922.service. Feb 12 20:21:21.839469 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 53922 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:21.840778 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:21.844029 systemd-logind[1104]: New session 19 of user core. Feb 12 20:21:21.844717 systemd[1]: Started session-19.scope. 
Feb 12 20:21:21.944738 sshd[3544]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:21.947178 systemd[1]: sshd@18-10.0.0.33:22-10.0.0.1:53922.service: Deactivated successfully. Feb 12 20:21:21.947873 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:21:21.948623 systemd-logind[1104]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:21:21.949274 systemd-logind[1104]: Removed session 19. Feb 12 20:21:26.949763 systemd[1]: Started sshd@19-10.0.0.33:22-10.0.0.1:59250.service. Feb 12 20:21:26.983241 sshd[3562]: Accepted publickey for core from 10.0.0.1 port 59250 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:26.984907 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:26.988546 systemd-logind[1104]: New session 20 of user core. Feb 12 20:21:26.989416 systemd[1]: Started session-20.scope. Feb 12 20:21:27.103163 sshd[3562]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:27.105582 systemd[1]: sshd@19-10.0.0.33:22-10.0.0.1:59250.service: Deactivated successfully. Feb 12 20:21:27.106318 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:21:27.106947 systemd-logind[1104]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:21:27.107641 systemd-logind[1104]: Removed session 20. Feb 12 20:21:32.107875 systemd[1]: Started sshd@20-10.0.0.33:22-10.0.0.1:59258.service. Feb 12 20:21:32.139478 sshd[3575]: Accepted publickey for core from 10.0.0.1 port 59258 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:32.140698 sshd[3575]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:32.144296 systemd-logind[1104]: New session 21 of user core. Feb 12 20:21:32.145294 systemd[1]: Started session-21.scope. Feb 12 20:21:32.258371 sshd[3575]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:32.261069 systemd[1]: sshd@20-10.0.0.33:22-10.0.0.1:59258.service: Deactivated successfully. Feb 12 20:21:32.261744 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:21:32.262367 systemd-logind[1104]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:21:32.263052 systemd-logind[1104]: Removed session 21. Feb 12 20:21:37.262135 systemd[1]: Started sshd@21-10.0.0.33:22-10.0.0.1:51396.service. Feb 12 20:21:37.293708 sshd[3590]: Accepted publickey for core from 10.0.0.1 port 51396 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:37.294761 sshd[3590]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:37.297769 systemd-logind[1104]: New session 22 of user core. Feb 12 20:21:37.298504 systemd[1]: Started session-22.scope. Feb 12 20:21:37.395914 sshd[3590]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:37.398617 systemd[1]: sshd@21-10.0.0.33:22-10.0.0.1:51396.service: Deactivated successfully. Feb 12 20:21:37.399189 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:21:37.399710 systemd-logind[1104]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:21:37.400773 systemd[1]: Started sshd@22-10.0.0.33:22-10.0.0.1:51400.service. Feb 12 20:21:37.401611 systemd-logind[1104]: Removed session 22. 
Feb 12 20:21:37.430252 sshd[3603]: Accepted publickey for core from 10.0.0.1 port 51400 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:37.431037 sshd[3603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:37.434007 systemd-logind[1104]: New session 23 of user core. Feb 12 20:21:37.434919 systemd[1]: Started session-23.scope. Feb 12 20:21:38.744695 env[1123]: time="2024-02-12T20:21:38.744639403Z" level=info msg="StopContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" with timeout 30 (s)" Feb 12 20:21:38.745069 env[1123]: time="2024-02-12T20:21:38.745012412Z" level=info msg="Stop container \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" with signal terminated" Feb 12 20:21:38.752653 systemd[1]: cri-containerd-dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da.scope: Deactivated successfully. Feb 12 20:21:38.763900 env[1123]: time="2024-02-12T20:21:38.763832954Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:21:38.770084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da-rootfs.mount: Deactivated successfully. Feb 12 20:21:38.770751 env[1123]: time="2024-02-12T20:21:38.770719439Z" level=info msg="StopContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" with timeout 1 (s)" Feb 12 20:21:38.770950 env[1123]: time="2024-02-12T20:21:38.770928225Z" level=info msg="Stop container \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" with signal terminated" Feb 12 20:21:38.777031 systemd-networkd[1017]: lxc_health: Link DOWN Feb 12 20:21:38.777039 systemd-networkd[1017]: lxc_health: Lost carrier Feb 12 20:21:38.778766 env[1123]: time="2024-02-12T20:21:38.778716255Z" level=info msg="shim disconnected" id=dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da Feb 12 20:21:38.778766 env[1123]: time="2024-02-12T20:21:38.778765626Z" level=warning msg="cleaning up after shim disconnected" id=dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da namespace=k8s.io Feb 12 20:21:38.778914 env[1123]: time="2024-02-12T20:21:38.778773400Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.784637 env[1123]: time="2024-02-12T20:21:38.784613183Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3659 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.787263 env[1123]: time="2024-02-12T20:21:38.787236086Z" level=info msg="StopContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" returns successfully" Feb 12 20:21:38.787911 env[1123]: time="2024-02-12T20:21:38.787879195Z" level=info msg="StopPodSandbox for \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\"" Feb 12 20:21:38.787980 env[1123]: time="2024-02-12T20:21:38.787948443Z" level=info msg="Container to stop \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.789460 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2-shm.mount: Deactivated successfully. 
Feb 12 20:21:38.797341 systemd[1]: cri-containerd-856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2.scope: Deactivated successfully. Feb 12 20:21:38.812321 systemd[1]: cri-containerd-ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26.scope: Deactivated successfully. Feb 12 20:21:38.812574 systemd[1]: cri-containerd-ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26.scope: Consumed 7.217s CPU time. Feb 12 20:21:38.816841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2-rootfs.mount: Deactivated successfully. Feb 12 20:21:38.823382 env[1123]: time="2024-02-12T20:21:38.823312161Z" level=info msg="shim disconnected" id=856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2 Feb 12 20:21:38.823382 env[1123]: time="2024-02-12T20:21:38.823367432Z" level=warning msg="cleaning up after shim disconnected" id=856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2 namespace=k8s.io Feb 12 20:21:38.823636 env[1123]: time="2024-02-12T20:21:38.823389554Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.829382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26-rootfs.mount: Deactivated successfully. Feb 12 20:21:38.831197 env[1123]: time="2024-02-12T20:21:38.831146988Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3704 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.831493 env[1123]: time="2024-02-12T20:21:38.831447212Z" level=info msg="TearDown network for sandbox \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\" successfully" Feb 12 20:21:38.831493 env[1123]: time="2024-02-12T20:21:38.831476747Z" level=info msg="StopPodSandbox for \"856432d2c6c6c97996922f7b077c810742c2c6a05fb48c82982a958500dfd1a2\" returns successfully" Feb 12 20:21:38.836847 env[1123]: time="2024-02-12T20:21:38.836813932Z" level=info msg="shim disconnected" id=ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26 Feb 12 20:21:38.837031 env[1123]: time="2024-02-12T20:21:38.836999955Z" level=warning msg="cleaning up after shim disconnected" id=ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26 namespace=k8s.io Feb 12 20:21:38.837031 env[1123]: time="2024-02-12T20:21:38.837016777Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.845848 env[1123]: time="2024-02-12T20:21:38.845804744Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3716 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.848559 env[1123]: time="2024-02-12T20:21:38.848529465Z" level=info msg="StopContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" returns successfully" Feb 12 20:21:38.848972 env[1123]: time="2024-02-12T20:21:38.848948017Z" level=info msg="StopPodSandbox for \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\"" Feb 12 20:21:38.849061 env[1123]: time="2024-02-12T20:21:38.849024689Z" level=info msg="Container to stop \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.849061 env[1123]: time="2024-02-12T20:21:38.849039787Z" level=info msg="Container to stop \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 12 20:21:38.849061 env[1123]: time="2024-02-12T20:21:38.849048623Z" level=info msg="Container to stop \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.849061 env[1123]: time="2024-02-12T20:21:38.849057930Z" level=info msg="Container to stop \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.849207 env[1123]: time="2024-02-12T20:21:38.849070994Z" level=info msg="Container to stop \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:38.853546 systemd[1]: cri-containerd-50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b.scope: Deactivated successfully. Feb 12 20:21:38.875351 env[1123]: time="2024-02-12T20:21:38.875308833Z" level=info msg="shim disconnected" id=50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b Feb 12 20:21:38.875663 env[1123]: time="2024-02-12T20:21:38.875631458Z" level=warning msg="cleaning up after shim disconnected" id=50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b namespace=k8s.io Feb 12 20:21:38.875663 env[1123]: time="2024-02-12T20:21:38.875648309Z" level=info msg="cleaning up dead shim" Feb 12 20:21:38.882170 env[1123]: time="2024-02-12T20:21:38.882099832Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3747 runtime=io.containerd.runc.v2\n" Feb 12 20:21:38.882523 env[1123]: time="2024-02-12T20:21:38.882494230Z" level=info msg="TearDown network for sandbox \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" successfully" Feb 12 20:21:38.882523 env[1123]: time="2024-02-12T20:21:38.882518766Z" level=info msg="StopPodSandbox for \"50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b\" returns successfully" Feb 12 20:21:38.969813 kubelet[1964]: I0212 20:21:38.969782 1964 scope.go:115] "RemoveContainer" containerID="dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da" Feb 12 20:21:38.971282 env[1123]: time="2024-02-12T20:21:38.971239267Z" level=info msg="RemoveContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\"" Feb 12 20:21:38.974564 env[1123]: time="2024-02-12T20:21:38.974530453Z" level=info msg="RemoveContainer for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" returns successfully" Feb 12 20:21:38.974748 kubelet[1964]: I0212 20:21:38.974728 1964 scope.go:115] "RemoveContainer" containerID="dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da" Feb 12 20:21:38.975391 env[1123]: time="2024-02-12T20:21:38.974931455Z" level=error msg="ContainerStatus for \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\": not found" Feb 12 20:21:38.975447 kubelet[1964]: E0212 20:21:38.975190 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\": not found" containerID="dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da" Feb 12 20:21:38.975447 kubelet[1964]: I0212 20:21:38.975227 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da} err="failed to get container status \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd452677f019e4f6d399c686723a5b16209df88dba6fb7c28bd4dc331fc215da\": not found"
Feb 12 20:21:38.975447 kubelet[1964]: I0212 20:21:38.975240 1964 scope.go:115] "RemoveContainer" containerID="ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26" Feb 12 20:21:38.976794 env[1123]: time="2024-02-12T20:21:38.976762365Z" level=info msg="RemoveContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\"" Feb 12 20:21:38.979634 env[1123]: time="2024-02-12T20:21:38.979606457Z" level=info msg="RemoveContainer for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" returns successfully" Feb 12 20:21:38.979758 kubelet[1964]: I0212 20:21:38.979734 1964 scope.go:115] "RemoveContainer" containerID="57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229" Feb 12 20:21:38.980532 env[1123]: time="2024-02-12T20:21:38.980507189Z" level=info msg="RemoveContainer for \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\"" Feb 12 20:21:38.983015 env[1123]: time="2024-02-12T20:21:38.982993661Z" level=info msg="RemoveContainer for \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\" returns successfully" Feb 12 20:21:38.983120 kubelet[1964]: I0212 20:21:38.983108 1964 scope.go:115] "RemoveContainer" containerID="bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01" Feb 12 20:21:38.983864 env[1123]: time="2024-02-12T20:21:38.983840625Z" level=info msg="RemoveContainer for \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\"" Feb 12 20:21:38.986205 env[1123]: time="2024-02-12T20:21:38.986176738Z" level=info msg="RemoveContainer for \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\" returns successfully" Feb 12 20:21:38.986294 kubelet[1964]: I0212 20:21:38.986277 1964 scope.go:115] "RemoveContainer" containerID="0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487" Feb 12 20:21:38.987092 env[1123]: time="2024-02-12T20:21:38.987055512Z" level=info msg="RemoveContainer for \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\"" Feb 12 20:21:38.989549 env[1123]: time="2024-02-12T20:21:38.989521174Z" level=info msg="RemoveContainer for \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\" returns successfully" Feb 12 20:21:38.989662 kubelet[1964]: I0212 20:21:38.989640 1964 scope.go:115] "RemoveContainer" containerID="c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1" Feb 12 20:21:38.990465 env[1123]: time="2024-02-12T20:21:38.990430704Z" level=info msg="RemoveContainer for \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\"" Feb 12 20:21:38.994162 env[1123]: time="2024-02-12T20:21:38.994126147Z" level=info msg="RemoveContainer for \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\" returns successfully" Feb 12 20:21:38.994342 kubelet[1964]: I0212 20:21:38.994319 1964 scope.go:115] "RemoveContainer" containerID="ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26" Feb 12 20:21:38.994591 env[1123]: time="2024-02-12T20:21:38.994525895Z" level=error msg="ContainerStatus for \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\": not found"
Feb 12 20:21:38.994748 kubelet[1964]: E0212 20:21:38.994681 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\": not found" containerID="ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26" Feb 12 20:21:38.994748 kubelet[1964]: I0212 20:21:38.994717 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26} err="failed to get container status \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab63427d5a681ee6171651f34dd3501afc445c446171f0f87f27f24e14f93b26\": not found" Feb 12 20:21:38.994748 kubelet[1964]: I0212 20:21:38.994726 1964 scope.go:115] "RemoveContainer" containerID="57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229" Feb 12 20:21:38.994941 env[1123]: time="2024-02-12T20:21:38.994886010Z" level=error msg="ContainerStatus for \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\": not found" Feb 12 20:21:38.995087 kubelet[1964]: E0212 20:21:38.995059 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\": not found" containerID="57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229" Feb 12 20:21:38.995087 kubelet[1964]: I0212 20:21:38.995085 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229} err="failed to get container status \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\": rpc error: code = NotFound desc = an error occurred when try to find container \"57edf509171dc8b585f98f5f42d49b757538119cb94c64a21f2a9a8710068229\": not found" Feb 12 20:21:38.995157 kubelet[1964]: I0212 20:21:38.995096 1964 scope.go:115] "RemoveContainer" containerID="bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01" Feb 12 20:21:38.995286 env[1123]: time="2024-02-12T20:21:38.995241477Z" level=error msg="ContainerStatus for \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\": not found" Feb 12 20:21:38.995407 kubelet[1964]: E0212 20:21:38.995389 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\": not found" containerID="bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01" Feb 12 20:21:38.995455 kubelet[1964]: I0212 20:21:38.995419 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01} err="failed to get container status \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdcc599da10fa14dc55a32e4bb6639805a37ba994c2690b383873d141ee85d01\": not found"
Feb 12 20:21:38.995455 kubelet[1964]: I0212 20:21:38.995428 1964 scope.go:115] "RemoveContainer" containerID="0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487" Feb 12 20:21:38.995646 env[1123]: time="2024-02-12T20:21:38.995585392Z" level=error msg="ContainerStatus for \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\": not found" Feb 12 20:21:38.995762 kubelet[1964]: E0212 20:21:38.995731 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\": not found" containerID="0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487" Feb 12 20:21:38.995762 kubelet[1964]: I0212 20:21:38.995751 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487} err="failed to get container status \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\": rpc error: code = NotFound desc = an error occurred when try to find container \"0f56aea294fd724a344d982530adb3be111474f057b39a21e8365663685a5487\": not found" Feb 12 20:21:38.995762 kubelet[1964]: I0212 20:21:38.995760 1964 scope.go:115] "RemoveContainer" containerID="c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1" Feb 12 20:21:38.995969 env[1123]: time="2024-02-12T20:21:38.995932342Z" level=error msg="ContainerStatus for \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\": not found" Feb 12 20:21:38.996136 kubelet[1964]: E0212 20:21:38.996117 1964 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\": not found" containerID="c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1" Feb 12 20:21:38.996194 kubelet[1964]: I0212 20:21:38.996141 1964 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1} err="failed to get container status \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6c38a1fafcdf25016194da9dfb2dce313f7bac32e5bd56882535c41dedf5da1\": not found" Feb 12 20:21:38.998381 kubelet[1964]: I0212 20:21:38.998350 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-etc-cni-netd\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998453 kubelet[1964]: I0212
20:21:38.998385 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-kernel\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998453 kubelet[1964]: I0212 20:21:38.998411 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path\") pod \"c20c170c-72e6-47ca-b7de-228c52a91eb5\" (UID: \"c20c170c-72e6-47ca-b7de-228c52a91eb5\") " Feb 12 20:21:38.998453 kubelet[1964]: I0212 20:21:38.998412 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998453 kubelet[1964]: I0212 20:21:38.998432 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzhmr\" (UniqueName: \"kubernetes.io/projected/c20c170c-72e6-47ca-b7de-228c52a91eb5-kube-api-access-gzhmr\") pod \"c20c170c-72e6-47ca-b7de-228c52a91eb5\" (UID: \"c20c170c-72e6-47ca-b7de-228c52a91eb5\") " Feb 12 20:21:38.998453 kubelet[1964]: I0212 20:21:38.998440 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998448 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-hostproc\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998460 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-hostproc" (OuterVolumeSpecName: "hostproc") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998497 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cni-path\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998513 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-bpf-maps\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998533 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggcxq\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-kube-api-access-ggcxq\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998571 kubelet[1964]: I0212 20:21:38.998547 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-lib-modules\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998704 kubelet[1964]: I0212 20:21:38.998575 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-xtables-lock\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998704 kubelet[1964]: I0212 20:21:38.998594 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-run\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998704 kubelet[1964]: W0212 20:21:38.998590 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c20c170c-72e6-47ca-b7de-228c52a91eb5/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:21:38.998819 kubelet[1964]: I0212 20:21:38.998800 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cni-path" (OuterVolumeSpecName: "cni-path") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998819 kubelet[1964]: I0212 20:21:38.998801 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998937 kubelet[1964]: I0212 20:21:38.998818 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998937 kubelet[1964]: I0212 20:21:38.998830 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998937 kubelet[1964]: I0212 20:21:38.998846 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.998937 kubelet[1964]: I0212 20:21:38.998613 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/846f3880-60fc-42fd-afc6-5f43a83ac338-clustermesh-secrets\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.998937 kubelet[1964]: I0212 20:21:38.998874 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998897 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998914 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-cgroup\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998929 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-net\") pod \"846f3880-60fc-42fd-afc6-5f43a83ac338\" (UID: \"846f3880-60fc-42fd-afc6-5f43a83ac338\") " Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998954 1964 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998963 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998971 1964 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-hostproc\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999069 kubelet[1964]: I0212 20:21:38.998979 1964 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-bpf-maps\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.998997 1964 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.999005 1964 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.999013 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-run\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.999020 1964 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cni-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.999034 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.999216 kubelet[1964]: I0212 20:21:38.999051 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:38.999216 kubelet[1964]: W0212 20:21:38.999116 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/846f3880-60fc-42fd-afc6-5f43a83ac338/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 12 20:21:39.000646 kubelet[1964]: I0212 20:21:39.000617 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c20c170c-72e6-47ca-b7de-228c52a91eb5" (UID: "c20c170c-72e6-47ca-b7de-228c52a91eb5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:21:39.000749 kubelet[1964]: I0212 20:21:39.000720 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 12 20:21:39.001675 kubelet[1964]: I0212 20:21:39.001654 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/846f3880-60fc-42fd-afc6-5f43a83ac338-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 12 20:21:39.001972 kubelet[1964]: I0212 20:21:39.001948 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20c170c-72e6-47ca-b7de-228c52a91eb5-kube-api-access-gzhmr" (OuterVolumeSpecName: "kube-api-access-gzhmr") pod "c20c170c-72e6-47ca-b7de-228c52a91eb5" (UID: "c20c170c-72e6-47ca-b7de-228c52a91eb5"). InnerVolumeSpecName "kube-api-access-gzhmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:39.003574 kubelet[1964]: I0212 20:21:39.003544 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:39.003625 kubelet[1964]: I0212 20:21:39.003603 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-kube-api-access-ggcxq" (OuterVolumeSpecName: "kube-api-access-ggcxq") pod "846f3880-60fc-42fd-afc6-5f43a83ac338" (UID: "846f3880-60fc-42fd-afc6-5f43a83ac338"). InnerVolumeSpecName "kube-api-access-ggcxq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 12 20:21:39.100067 kubelet[1964]: I0212 20:21:39.099981 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gzhmr\" (UniqueName: \"kubernetes.io/projected/c20c170c-72e6-47ca-b7de-228c52a91eb5-kube-api-access-gzhmr\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100067 kubelet[1964]: I0212 20:21:39.100055 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ggcxq\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-kube-api-access-ggcxq\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100067 kubelet[1964]: I0212 20:21:39.100065 1964 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/846f3880-60fc-42fd-afc6-5f43a83ac338-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100067 kubelet[1964]: I0212 20:21:39.100074 1964 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/846f3880-60fc-42fd-afc6-5f43a83ac338-hubble-tls\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100067 kubelet[1964]: I0212 20:21:39.100083 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100345 kubelet[1964]: I0212 20:21:39.100096 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100345 kubelet[1964]: I0212 20:21:39.100103 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/846f3880-60fc-42fd-afc6-5f43a83ac338-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.100345 kubelet[1964]: I0212 20:21:39.100111 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c20c170c-72e6-47ca-b7de-228c52a91eb5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Feb 12 20:21:39.273484 systemd[1]: Removed slice kubepods-besteffort-podc20c170c_72e6_47ca_b7de_228c52a91eb5.slice. Feb 12 20:21:39.278468 systemd[1]: Removed slice kubepods-burstable-pod846f3880_60fc_42fd_afc6_5f43a83ac338.slice. Feb 12 20:21:39.278570 systemd[1]: kubepods-burstable-pod846f3880_60fc_42fd_afc6_5f43a83ac338.slice: Consumed 7.295s CPU time. Feb 12 20:21:39.749567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b-rootfs.mount: Deactivated successfully. Feb 12 20:21:39.749652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50e6f2008ed5cadccc47c68c8caa15a832bbb3160176086ceb5b5474dc18881b-shm.mount: Deactivated successfully. Feb 12 20:21:39.749708 systemd[1]: var-lib-kubelet-pods-846f3880\x2d60fc\x2d42fd\x2dafc6\x2d5f43a83ac338-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 12 20:21:39.749758 systemd[1]: var-lib-kubelet-pods-846f3880\x2d60fc\x2d42fd\x2dafc6\x2d5f43a83ac338-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 12 20:21:39.749821 systemd[1]: var-lib-kubelet-pods-846f3880\x2d60fc\x2d42fd\x2dafc6\x2d5f43a83ac338-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dggcxq.mount: Deactivated successfully. Feb 12 20:21:39.749869 systemd[1]: var-lib-kubelet-pods-c20c170c\x2d72e6\x2d47ca\x2db7de\x2d228c52a91eb5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgzhmr.mount: Deactivated successfully. Feb 12 20:21:40.717684 sshd[3603]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:40.720880 systemd[1]: Started sshd@23-10.0.0.33:22-10.0.0.1:51414.service. Feb 12 20:21:40.721257 systemd[1]: sshd@22-10.0.0.33:22-10.0.0.1:51400.service: Deactivated successfully. Feb 12 20:21:40.721790 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:21:40.722326 systemd-logind[1104]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:21:40.723036 systemd-logind[1104]: Removed session 23. Feb 12 20:21:40.750154 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 51414 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:40.751019 sshd[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:40.753873 systemd-logind[1104]: New session 24 of user core. Feb 12 20:21:40.754675 systemd[1]: Started session-24.scope. Feb 12 20:21:40.814797 kubelet[1964]: I0212 20:21:40.814760 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=846f3880-60fc-42fd-afc6-5f43a83ac338 path="/var/lib/kubelet/pods/846f3880-60fc-42fd-afc6-5f43a83ac338/volumes" Feb 12 20:21:40.815502 kubelet[1964]: I0212 20:21:40.815487 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=c20c170c-72e6-47ca-b7de-228c52a91eb5 path="/var/lib/kubelet/pods/c20c170c-72e6-47ca-b7de-228c52a91eb5/volumes" Feb 12 20:21:41.280486 sshd[3767]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:41.284740 systemd[1]: Started sshd@24-10.0.0.33:22-10.0.0.1:51426.service. Feb 12 20:21:41.290248 systemd[1]: sshd@23-10.0.0.33:22-10.0.0.1:51414.service: Deactivated successfully. Feb 12 20:21:41.290901 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 20:21:41.291494 systemd-logind[1104]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:21:41.292588 systemd-logind[1104]: Removed session 24. 
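The two kubelet_volumes.go entries at 20:21:40.814 confirm the teardown above completed end to end: once every volume reported detached, the kubelet deleted the now-empty /var/lib/kubelet/pods/<uid>/volumes trees for both pod UIDs. A rough illustrative scan for the same condition follows; it is a sketch only (the kubelet additionally checks that the pod is no longer known to it before deleting anything):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        base := "/var/lib/kubelet/pods" // default kubelet pod state dir
        pods, err := os.ReadDir(base)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, p := range pods {
            if !p.IsDir() {
                continue
            }
            volDir := filepath.Join(base, p.Name(), "volumes")
            entries, err := os.ReadDir(volDir)
            if err != nil {
                continue // no volumes dir at all
            }
            if len(entries) == 0 {
                // Unmounts finished; candidate for orphan cleanup.
                fmt.Println("orphan candidate:", volDir)
            }
        }
    }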
Feb 12 20:21:41.295356 kubelet[1964]: I0212 20:21:41.295291 1964 topology_manager.go:212] "Topology Admit Handler" Feb 12 20:21:41.295356 kubelet[1964]: E0212 20:21:41.295357 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="cilium-agent" Feb 12 20:21:41.295356 kubelet[1964]: E0212 20:21:41.295367 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="mount-cgroup" Feb 12 20:21:41.295535 kubelet[1964]: E0212 20:21:41.295373 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="apply-sysctl-overwrites" Feb 12 20:21:41.295535 kubelet[1964]: E0212 20:21:41.295379 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="mount-bpf-fs" Feb 12 20:21:41.295535 kubelet[1964]: E0212 20:21:41.295384 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c20c170c-72e6-47ca-b7de-228c52a91eb5" containerName="cilium-operator" Feb 12 20:21:41.295535 kubelet[1964]: E0212 20:21:41.295390 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="clean-cilium-state" Feb 12 20:21:41.295535 kubelet[1964]: I0212 20:21:41.295421 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="846f3880-60fc-42fd-afc6-5f43a83ac338" containerName="cilium-agent" Feb 12 20:21:41.295535 kubelet[1964]: I0212 20:21:41.295428 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="c20c170c-72e6-47ca-b7de-228c52a91eb5" containerName="cilium-operator" Feb 12 20:21:41.299787 systemd[1]: Created slice kubepods-burstable-pod77d5c6c1_52a8_4e1c_acfc_ac89d36c77e9.slice. Feb 12 20:21:41.301307 kubelet[1964]: W0212 20:21:41.301281 1964 reflector.go:533] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:21:41.301307 kubelet[1964]: E0212 20:21:41.301310 1964 reflector.go:148] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 12 20:21:41.319048 sshd[3779]: Accepted publickey for core from 10.0.0.1 port 51426 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:41.320111 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:41.324952 systemd[1]: Started session-25.scope. Feb 12 20:21:41.325532 systemd-logind[1104]: New session 25 of user core. 
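The reflector warnings at 20:21:41.301 look like a node-authorizer startup race rather than a broken RBAC rule: under Node authorization a kubelet may read a secret only once a pod that mounts it is bound to the node, and here the watch on "hubble-server-certs" began before the binding of the new cilium pod was visible, hence "no relationship found between node 'localhost' and this object"; the reflector retries and normally succeeds moments later. An illustrative client-go probe for that specific denial (the kubeconfig path is an assumption; Flatcar deployments may keep the node credentials elsewhere):

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed location of the node's credentials.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _, err = cs.CoreV1().Secrets("kube-system").Get(
            context.TODO(), "hubble-server-certs", metav1.GetOptions{})
        if apierrors.IsForbidden(err) {
            fmt.Println("forbidden: node/secret relationship not yet recorded")
        }
    }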
Feb 12 20:21:41.411598 kubelet[1964]: I0212 20:21:41.411560 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-clustermesh-secrets\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411598 kubelet[1964]: I0212 20:21:41.411606 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-kernel\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411598 kubelet[1964]: I0212 20:21:41.411623 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78sg4\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-kube-api-access-78sg4\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411827 kubelet[1964]: I0212 20:21:41.411705 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-etc-cni-netd\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411827 kubelet[1964]: I0212 20:21:41.411748 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-xtables-lock\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411827 kubelet[1964]: I0212 20:21:41.411780 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hubble-tls\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411827 kubelet[1964]: I0212 20:21:41.411820 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-cgroup\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411923 kubelet[1964]: I0212 20:21:41.411850 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-config-path\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411923 kubelet[1964]: I0212 20:21:41.411878 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-run\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.411923 kubelet[1964]: I0212 20:21:41.411903 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-path\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cni-path\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.412004 kubelet[1964]: I0212 20:21:41.411928 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-ipsec-secrets\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.412004 kubelet[1964]: I0212 20:21:41.411954 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-bpf-maps\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.412004 kubelet[1964]: I0212 20:21:41.411978 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-net\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.412081 kubelet[1964]: I0212 20:21:41.412025 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hostproc\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.412081 kubelet[1964]: I0212 20:21:41.412057 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-lib-modules\") pod \"cilium-x26jt\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " pod="kube-system/cilium-x26jt" Feb 12 20:21:41.435950 sshd[3779]: pam_unix(sshd:session): session closed for user core Feb 12 20:21:41.439139 systemd[1]: Started sshd@25-10.0.0.33:22-10.0.0.1:51430.service. Feb 12 20:21:41.439537 systemd[1]: sshd@24-10.0.0.33:22-10.0.0.1:51426.service: Deactivated successfully. Feb 12 20:21:41.440234 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 20:21:41.441612 systemd-logind[1104]: Session 25 logged out. Waiting for processes to exit. Feb 12 20:21:41.443030 systemd-logind[1104]: Removed session 25. Feb 12 20:21:41.475335 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 51430 ssh2: RSA SHA256:0TYEZ+ET1/q46mu/UgoG+6MHo530B5lZPCcmxDlyeg8 Feb 12 20:21:41.476396 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:21:41.479220 systemd-logind[1104]: New session 26 of user core. Feb 12 20:21:41.480006 systemd[1]: Started session-26.scope. 
Feb 12 20:21:41.812814 kubelet[1964]: E0212 20:21:41.812782 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:42.802877 kubelet[1964]: E0212 20:21:42.802817 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:42.803436 env[1123]: time="2024-02-12T20:21:42.803375556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x26jt,Uid:77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9,Namespace:kube-system,Attempt:0,}" Feb 12 20:21:42.875783 kubelet[1964]: E0212 20:21:42.875756 1964 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 12 20:21:42.979783 env[1123]: time="2024-02-12T20:21:42.979684333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:21:42.979783 env[1123]: time="2024-02-12T20:21:42.979738263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:21:42.980006 env[1123]: time="2024-02-12T20:21:42.979755546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:21:42.980006 env[1123]: time="2024-02-12T20:21:42.979925171Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a pid=3814 runtime=io.containerd.runc.v2 Feb 12 20:21:42.997327 systemd[1]: Started cri-containerd-92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a.scope. Feb 12 20:21:43.016248 env[1123]: time="2024-02-12T20:21:43.016205648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x26jt,Uid:77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\"" Feb 12 20:21:43.017051 kubelet[1964]: E0212 20:21:43.017030 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 12 20:21:43.019035 env[1123]: time="2024-02-12T20:21:43.019000242Z" level=info msg="CreateContainer within sandbox \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 12 20:21:43.273861 env[1123]: time="2024-02-12T20:21:43.273799340Z" level=info msg="CreateContainer within sandbox \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\"" Feb 12 20:21:43.274393 env[1123]: time="2024-02-12T20:21:43.274367067Z" level=info msg="StartContainer for \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\"" Feb 12 20:21:43.287728 systemd[1]: Started cri-containerd-eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414.scope. Feb 12 20:21:43.296095 systemd[1]: cri-containerd-eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414.scope: Deactivated successfully. 
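The recurring "Nameserver limits exceeded" entries are the kubelet warning that the host resolv.conf lists more nameservers than the limit it applies when composing pod DNS: it keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and omits the rest, so the message is noisy but harmless. A small sketch of the same count, assuming the conventional /etc/resolv.conf path and the three-entry limit implied by the applied nameserver line in the log:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const maxNS = 3 // limit implied by the applied nameserver line above
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNS {
            fmt.Printf("limit exceeded: applying %v, omitting %v\n",
                servers[:maxNS], servers[maxNS:])
        }
    }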
Feb 12 20:21:43.296401 systemd[1]: Stopped cri-containerd-eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414.scope. Feb 12 20:21:43.327340 env[1123]: time="2024-02-12T20:21:43.327277048Z" level=info msg="shim disconnected" id=eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414 Feb 12 20:21:43.327340 env[1123]: time="2024-02-12T20:21:43.327342941Z" level=warning msg="cleaning up after shim disconnected" id=eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414 namespace=k8s.io Feb 12 20:21:43.327556 env[1123]: time="2024-02-12T20:21:43.327353250Z" level=info msg="cleaning up dead shim" Feb 12 20:21:43.333723 env[1123]: time="2024-02-12T20:21:43.333662250Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3873 runtime=io.containerd.runc.v2\ntime=\"2024-02-12T20:21:43Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 12 20:21:43.334037 env[1123]: time="2024-02-12T20:21:43.333909772Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Feb 12 20:21:43.340078 env[1123]: time="2024-02-12T20:21:43.340028097Z" level=error msg="Failed to pipe stdout of container \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\"" error="reading from a closed fifo" Feb 12 20:21:43.340078 env[1123]: time="2024-02-12T20:21:43.340036662Z" level=error msg="Failed to pipe stderr of container \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\"" error="reading from a closed fifo" Feb 12 20:21:43.342287 env[1123]: time="2024-02-12T20:21:43.342247620Z" level=error msg="StartContainer for \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 12 20:21:43.342545 kubelet[1964]: E0212 20:21:43.342521 1964 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414" Feb 12 20:21:43.342688 kubelet[1964]: E0212 20:21:43.342669 1964 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 12 20:21:43.342688 kubelet[1964]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 12 20:21:43.342688 kubelet[1964]: rm /hostbin/cilium-mount Feb 12 20:21:43.342809 kubelet[1964]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-78sg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-x26jt_kube-system(77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 12 20:21:43.342809 kubelet[1964]: E0212 20:21:43.342726 1964 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-x26jt" podUID=77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9 Feb 12 20:21:43.517252 systemd[1]: run-containerd-runc-k8s.io-92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a-runc.ea7WNo.mount: Deactivated successfully. Feb 12 20:21:43.984459 env[1123]: time="2024-02-12T20:21:43.984416489Z" level=info msg="StopPodSandbox for \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\"" Feb 12 20:21:43.984854 env[1123]: time="2024-02-12T20:21:43.984469728Z" level=info msg="Container to stop \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 12 20:21:43.986468 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a-shm.mount: Deactivated successfully. Feb 12 20:21:43.990057 systemd[1]: cri-containerd-92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a.scope: Deactivated successfully. Feb 12 20:21:44.007815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a-rootfs.mount: Deactivated successfully. 
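The RunContainerError above is the root failure of this sequence: the mount-cgroup init container's spec (dumped in full by kuberuntime_manager.go) carries SELinuxOptions with Type:spc_t, Level:s0, and runc applies such a label by writing it to /proc/self/attr/keycreate before exec; on this host the kernel rejects that write with "invalid argument", so no task is ever created, StartContainer fails, and the sandbox teardown that follows is the kubelet cleaning up after the failed start. This typically points at a label the running kernel or loaded SELinux policy will not accept for keyrings. An illustrative probe of just the failing write (the user and role parts of the label are assumptions; the spec pins only type and level):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Type and level from the pod spec dump above; user/role assumed.
        label := "system_u:system_r:spc_t:s0"
        if err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0); err != nil {
            // On the host in this log: "invalid argument".
            fmt.Println("keycreate write failed:", err)
            return
        }
        fmt.Println("keycreate label accepted")
    }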
Feb 12 20:21:44.011281 env[1123]: time="2024-02-12T20:21:44.011231080Z" level=info msg="shim disconnected" id=92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a Feb 12 20:21:44.011373 env[1123]: time="2024-02-12T20:21:44.011287825Z" level=warning msg="cleaning up after shim disconnected" id=92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a namespace=k8s.io Feb 12 20:21:44.011373 env[1123]: time="2024-02-12T20:21:44.011300128Z" level=info msg="cleaning up dead shim" Feb 12 20:21:44.017583 env[1123]: time="2024-02-12T20:21:44.017520551Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3903 runtime=io.containerd.runc.v2\n" Feb 12 20:21:44.017847 env[1123]: time="2024-02-12T20:21:44.017819940Z" level=info msg="TearDown network for sandbox \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\" successfully" Feb 12 20:21:44.017847 env[1123]: time="2024-02-12T20:21:44.017843995Z" level=info msg="StopPodSandbox for \"92f9ae73b8c83901d9fb7b487080bc7daa87f932c0b31402df857a3281034b8a\" returns successfully" Feb 12 20:21:44.129219 kubelet[1964]: I0212 20:21:44.129154 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cni-path\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129219 kubelet[1964]: I0212 20:21:44.129220 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-cgroup\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129219 kubelet[1964]: I0212 20:21:44.129237 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-xtables-lock\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129252 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-bpf-maps\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129256 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129289 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129264 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129276 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-kernel\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129313 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129316 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129338 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78sg4\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-kube-api-access-78sg4\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129359 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hubble-tls\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129376 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-run\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129391 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hostproc\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129409 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-config-path\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") " Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129425 1964 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-net\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") "
Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129443 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-clustermesh-secrets\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") "
Feb 12 20:21:44.129703 kubelet[1964]: I0212 20:21:44.129452 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129461 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-ipsec-secrets\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") "
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129473 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129479 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-etc-cni-netd\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") "
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129497 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129516 1964 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-lib-modules\") pod \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\" (UID: \"77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9\") "
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129557 1964 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129584 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129593 1964 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129601 1964 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129609 1964 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129620 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129629 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.130189 kubelet[1964]: W0212 20:21:44.129621 1964 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129661 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129705 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 12 20:21:44.130189 kubelet[1964]: I0212 20:21:44.129637 1964 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.132495 kubelet[1964]: I0212 20:21:44.131246 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 12 20:21:44.133019 kubelet[1964]: I0212 20:21:44.132968 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:21:44.133025 systemd[1]: var-lib-kubelet-pods-77d5c6c1\x2d52a8\x2d4e1c\x2dacfc\x2dac89d36c77e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d78sg4.mount: Deactivated successfully.
Feb 12 20:21:44.133219 kubelet[1964]: I0212 20:21:44.133018 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:21:44.134039 kubelet[1964]: I0212 20:21:44.134009 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-kube-api-access-78sg4" (OuterVolumeSpecName: "kube-api-access-78sg4") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "kube-api-access-78sg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 12 20:21:44.134473 systemd[1]: var-lib-kubelet-pods-77d5c6c1\x2d52a8\x2d4e1c\x2dacfc\x2dac89d36c77e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 12 20:21:44.134539 systemd[1]: var-lib-kubelet-pods-77d5c6c1\x2d52a8\x2d4e1c\x2dacfc\x2dac89d36c77e9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:21:44.134804 kubelet[1964]: I0212 20:21:44.134780 1964 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" (UID: "77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 12 20:21:44.230472 kubelet[1964]: I0212 20:21:44.230367 1964 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230472 kubelet[1964]: I0212 20:21:44.230449 1964 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-78sg4\" (UniqueName: \"kubernetes.io/projected/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-kube-api-access-78sg4\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230472 kubelet[1964]: I0212 20:21:44.230465 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230472 kubelet[1964]: I0212 20:21:44.230480 1964 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230472 kubelet[1964]: I0212 20:21:44.230493 1964 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230804 kubelet[1964]: I0212 20:21:44.230509 1964 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.230804 kubelet[1964]: I0212 20:21:44.230525 1964 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 12 20:21:44.517346 systemd[1]: var-lib-kubelet-pods-77d5c6c1\x2d52a8\x2d4e1c\x2dacfc\x2dac89d36c77e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 12 20:21:44.583119 kubelet[1964]: I0212 20:21:44.583077 1964 setters.go:548] "Node became not ready" node="localhost" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-12 20:21:44.583023602 +0000 UTC m=+81.854145332 LastTransitionTime:2024-02-12 20:21:44.583023602 +0000 UTC m=+81.854145332 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
Feb 12 20:21:44.817836 systemd[1]: Removed slice kubepods-burstable-pod77d5c6c1_52a8_4e1c_acfc_ac89d36c77e9.slice.
Feb 12 20:21:44.986974 kubelet[1964]: I0212 20:21:44.986933 1964 scope.go:115] "RemoveContainer" containerID="eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414"
Feb 12 20:21:44.988537 env[1123]: time="2024-02-12T20:21:44.988504391Z" level=info msg="RemoveContainer for \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\""
Feb 12 20:21:45.097417 env[1123]: time="2024-02-12T20:21:45.097288571Z" level=info msg="RemoveContainer for \"eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414\" returns successfully"
Feb 12 20:21:45.166770 kubelet[1964]: I0212 20:21:45.166726 1964 topology_manager.go:212] "Topology Admit Handler"
Feb 12 20:21:45.167166 kubelet[1964]: E0212 20:21:45.166787 1964 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" containerName="mount-cgroup"
Feb 12 20:21:45.167166 kubelet[1964]: I0212 20:21:45.166809 1964 memory_manager.go:346] "RemoveStaleState removing state" podUID="77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9" containerName="mount-cgroup"
Feb 12 20:21:45.171533 systemd[1]: Created slice kubepods-burstable-pod959a1f4d_08c6_4650_bdf4_90cb3a1f0e77.slice.
Feb 12 20:21:45.336333 kubelet[1964]: I0212 20:21:45.336288 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-cilium-run\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336333 kubelet[1964]: I0212 20:21:45.336333 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-hostproc\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336333 kubelet[1964]: I0212 20:21:45.336350 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-hubble-tls\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336568 kubelet[1964]: I0212 20:21:45.336421 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-lib-modules\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336568 kubelet[1964]: I0212 20:21:45.336446 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-xtables-lock\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336568 kubelet[1964]: I0212 20:21:45.336464 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-host-proc-sys-kernel\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336568 kubelet[1964]: I0212 20:21:45.336521 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-bpf-maps\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336568 kubelet[1964]: I0212 20:21:45.336571 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-cilium-cgroup\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336689 kubelet[1964]: I0212 20:21:45.336600 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-cni-path\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336689 kubelet[1964]: I0212 20:21:45.336633 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-cilium-config-path\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336689 kubelet[1964]: I0212 20:21:45.336653 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-host-proc-sys-net\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336689 kubelet[1964]: I0212 20:21:45.336669 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-etc-cni-netd\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336779 kubelet[1964]: I0212 20:21:45.336703 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-clustermesh-secrets\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336779 kubelet[1964]: I0212 20:21:45.336720 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-cilium-ipsec-secrets\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:45.336779 kubelet[1964]: I0212 20:21:45.336737 1964 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv2t9\" (UniqueName: \"kubernetes.io/projected/959a1f4d-08c6-4650-bdf4-90cb3a1f0e77-kube-api-access-wv2t9\") pod \"cilium-d2flj\" (UID: \"959a1f4d-08c6-4650-bdf4-90cb3a1f0e77\") " pod="kube-system/cilium-d2flj"
Feb 12 20:21:46.074086 kubelet[1964]: E0212 20:21:46.074023 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:46.074606 env[1123]: time="2024-02-12T20:21:46.074558168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2flj,Uid:959a1f4d-08c6-4650-bdf4-90cb3a1f0e77,Namespace:kube-system,Attempt:0,}"
Feb 12 20:21:46.202166 env[1123]: time="2024-02-12T20:21:46.202026884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:21:46.202166 env[1123]: time="2024-02-12T20:21:46.202142820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:21:46.202166 env[1123]: time="2024-02-12T20:21:46.202159031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:21:46.202386 env[1123]: time="2024-02-12T20:21:46.202339269Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf pid=3931 runtime=io.containerd.runc.v2
Feb 12 20:21:46.214907 systemd[1]: run-containerd-runc-k8s.io-cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf-runc.wLCGBr.mount: Deactivated successfully.
Feb 12 20:21:46.217794 systemd[1]: Started cri-containerd-cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf.scope.
Feb 12 20:21:46.233359 env[1123]: time="2024-02-12T20:21:46.233310671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d2flj,Uid:959a1f4d-08c6-4650-bdf4-90cb3a1f0e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\""
Feb 12 20:21:46.234093 kubelet[1964]: E0212 20:21:46.234072 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:46.236306 env[1123]: time="2024-02-12T20:21:46.236225835Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 12 20:21:46.307403 env[1123]: time="2024-02-12T20:21:46.307340671Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac\""
Feb 12 20:21:46.308308 env[1123]: time="2024-02-12T20:21:46.308269348Z" level=info msg="StartContainer for \"ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac\""
Feb 12 20:21:46.321821 systemd[1]: Started cri-containerd-ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac.scope.
Feb 12 20:21:46.352631 systemd[1]: cri-containerd-ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac.scope: Deactivated successfully.
Feb 12 20:21:46.353421 env[1123]: time="2024-02-12T20:21:46.353368272Z" level=info msg="StartContainer for \"ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac\" returns successfully"
Feb 12 20:21:46.395774 env[1123]: time="2024-02-12T20:21:46.395725226Z" level=info msg="shim disconnected" id=ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac
Feb 12 20:21:46.395774 env[1123]: time="2024-02-12T20:21:46.395773897Z" level=warning msg="cleaning up after shim disconnected" id=ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac namespace=k8s.io
Feb 12 20:21:46.396004 env[1123]: time="2024-02-12T20:21:46.395782353Z" level=info msg="cleaning up dead shim"
Feb 12 20:21:46.401316 env[1123]: time="2024-02-12T20:21:46.401292762Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4012 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:46.433356 kubelet[1964]: W0212 20:21:46.433292 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77d5c6c1_52a8_4e1c_acfc_ac89d36c77e9.slice/cri-containerd-eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414.scope WatchSource:0}: container "eafe41eddae917240ea746b277cd28906e3ff621957e71e3d3c66f95f4b38414" in namespace "k8s.io": not found
Feb 12 20:21:46.814536 kubelet[1964]: I0212 20:21:46.814506 1964 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9 path="/var/lib/kubelet/pods/77d5c6c1-52a8-4e1c-acfc-ac89d36c77e9/volumes"
Feb 12 20:21:46.995798 kubelet[1964]: E0212 20:21:46.995772 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:46.997287 env[1123]: time="2024-02-12T20:21:46.997250585Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 12 20:21:47.009070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2515016943.mount: Deactivated successfully.
Feb 12 20:21:47.013955 env[1123]: time="2024-02-12T20:21:47.013904236Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c\""
Feb 12 20:21:47.014617 env[1123]: time="2024-02-12T20:21:47.014576194Z" level=info msg="StartContainer for \"9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c\""
Feb 12 20:21:47.029347 systemd[1]: Started cri-containerd-9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c.scope.
Feb 12 20:21:47.056805 systemd[1]: cri-containerd-9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c.scope: Deactivated successfully.
Feb 12 20:21:47.110491 env[1123]: time="2024-02-12T20:21:47.110388499Z" level=info msg="StartContainer for \"9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c\" returns successfully"
Feb 12 20:21:47.263437 env[1123]: time="2024-02-12T20:21:47.263388643Z" level=info msg="shim disconnected" id=9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c
Feb 12 20:21:47.263437 env[1123]: time="2024-02-12T20:21:47.263434179Z" level=warning msg="cleaning up after shim disconnected" id=9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c namespace=k8s.io
Feb 12 20:21:47.263437 env[1123]: time="2024-02-12T20:21:47.263442735Z" level=info msg="cleaning up dead shim"
Feb 12 20:21:47.268843 env[1123]: time="2024-02-12T20:21:47.268801745Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4073 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:47.876836 kubelet[1964]: E0212 20:21:47.876789 1964 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 12 20:21:47.993797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c-rootfs.mount: Deactivated successfully.
Feb 12 20:21:47.999370 kubelet[1964]: E0212 20:21:47.999344 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:48.000806 env[1123]: time="2024-02-12T20:21:48.000772156Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 12 20:21:48.233372 env[1123]: time="2024-02-12T20:21:48.233200130Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67\""
Feb 12 20:21:48.234289 env[1123]: time="2024-02-12T20:21:48.234062288Z" level=info msg="StartContainer for \"833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67\""
Feb 12 20:21:48.251009 systemd[1]: Started cri-containerd-833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67.scope.
Feb 12 20:21:48.275407 env[1123]: time="2024-02-12T20:21:48.275171160Z" level=info msg="StartContainer for \"833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67\" returns successfully"
Feb 12 20:21:48.276106 systemd[1]: cri-containerd-833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67.scope: Deactivated successfully.
Feb 12 20:21:48.297799 env[1123]: time="2024-02-12T20:21:48.297721315Z" level=info msg="shim disconnected" id=833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67
Feb 12 20:21:48.297799 env[1123]: time="2024-02-12T20:21:48.297785776Z" level=warning msg="cleaning up after shim disconnected" id=833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67 namespace=k8s.io
Feb 12 20:21:48.297799 env[1123]: time="2024-02-12T20:21:48.297797608Z" level=info msg="cleaning up dead shim"
Feb 12 20:21:48.304382 env[1123]: time="2024-02-12T20:21:48.304323171Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4129 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:48.993848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67-rootfs.mount: Deactivated successfully.
Feb 12 20:21:49.003288 kubelet[1964]: E0212 20:21:49.003153 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:49.005638 env[1123]: time="2024-02-12T20:21:49.005540477Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 12 20:21:49.191759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681916450.mount: Deactivated successfully.
Feb 12 20:21:49.403510 env[1123]: time="2024-02-12T20:21:49.403458802Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80\""
Feb 12 20:21:49.404578 env[1123]: time="2024-02-12T20:21:49.403902315Z" level=info msg="StartContainer for \"7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80\""
Feb 12 20:21:49.419973 systemd[1]: Started cri-containerd-7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80.scope.
Feb 12 20:21:49.437050 systemd[1]: cri-containerd-7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80.scope: Deactivated successfully.
Feb 12 20:21:49.502730 env[1123]: time="2024-02-12T20:21:49.502607835Z" level=info msg="StartContainer for \"7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80\" returns successfully"
Feb 12 20:21:49.536408 env[1123]: time="2024-02-12T20:21:49.536352174Z" level=info msg="shim disconnected" id=7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80
Feb 12 20:21:49.536408 env[1123]: time="2024-02-12T20:21:49.536409431Z" level=warning msg="cleaning up after shim disconnected" id=7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80 namespace=k8s.io
Feb 12 20:21:49.536676 env[1123]: time="2024-02-12T20:21:49.536424460Z" level=info msg="cleaning up dead shim"
Feb 12 20:21:49.542314 env[1123]: time="2024-02-12T20:21:49.542268899Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:21:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n"
Feb 12 20:21:49.545343 kubelet[1964]: W0212 20:21:49.545304 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959a1f4d_08c6_4650_bdf4_90cb3a1f0e77.slice/cri-containerd-ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac.scope WatchSource:0}: task ec765a13e7f57ab25ebefef85714a0f0a8afdfd25ed9d1117064564a5dcaecac not found: not found
Feb 12 20:21:49.812495 kubelet[1964]: E0212 20:21:49.812459 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:49.993962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80-rootfs.mount: Deactivated successfully.
Feb 12 20:21:50.006155 kubelet[1964]: E0212 20:21:50.006136 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:50.008387 env[1123]: time="2024-02-12T20:21:50.008350939Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 12 20:21:50.022970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2583427904.mount: Deactivated successfully.
Feb 12 20:21:50.028853 env[1123]: time="2024-02-12T20:21:50.028807145Z" level=info msg="CreateContainer within sandbox \"cfa977ff89d6b133b82f42517d22667e1df3a4721eb679e06ea68bbe8b7d40cf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3\""
Feb 12 20:21:50.029234 env[1123]: time="2024-02-12T20:21:50.029212668Z" level=info msg="StartContainer for \"aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3\""
Feb 12 20:21:50.040961 systemd[1]: Started cri-containerd-aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3.scope.
Feb 12 20:21:50.063785 env[1123]: time="2024-02-12T20:21:50.063698914Z" level=info msg="StartContainer for \"aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3\" returns successfully"
Feb 12 20:21:50.312014 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 12 20:21:51.010634 kubelet[1964]: E0212 20:21:51.010602 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:51.023077 kubelet[1964]: I0212 20:21:51.023038 1964 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-d2flj" podStartSLOduration=6.022973815 podCreationTimestamp="2024-02-12 20:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:21:51.02215904 +0000 UTC m=+88.293280760" watchObservedRunningTime="2024-02-12 20:21:51.022973815 +0000 UTC m=+88.294095565"
Feb 12 20:21:52.075568 kubelet[1964]: E0212 20:21:52.075521 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:52.651309 kubelet[1964]: W0212 20:21:52.651259 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959a1f4d_08c6_4650_bdf4_90cb3a1f0e77.slice/cri-containerd-9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c.scope WatchSource:0}: task 9971e10eb0e1d2152951e5d7c7577be6e88058e10159dc31149dd35a11237e1c not found: not found
Feb 12 20:21:52.801531 systemd-networkd[1017]: lxc_health: Link UP
Feb 12 20:21:52.820297 systemd-networkd[1017]: lxc_health: Gained carrier
Feb 12 20:21:52.821009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 12 20:21:53.893084 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Feb 12 20:21:54.075543 kubelet[1964]: E0212 20:21:54.075505 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:54.490515 systemd[1]: run-containerd-runc-k8s.io-aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3-runc.9RxTyY.mount: Deactivated successfully.
Feb 12 20:21:55.017270 kubelet[1964]: E0212 20:21:55.017236 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:55.757755 kubelet[1964]: W0212 20:21:55.757697 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959a1f4d_08c6_4650_bdf4_90cb3a1f0e77.slice/cri-containerd-833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67.scope WatchSource:0}: task 833b888f42de1da08843b92c6c15765ee21260fb527a071cb1c97244a2efaa67 not found: not found
Feb 12 20:21:56.018636 kubelet[1964]: E0212 20:21:56.018546 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:56.573256 systemd[1]: run-containerd-runc-k8s.io-aab1cb90dbae6ec12f91aeab6ca025843dadca427f7dfc93ffcfef40c5d8fde3-runc.Ttemwm.mount: Deactivated successfully.
Feb 12 20:21:57.812591 kubelet[1964]: E0212 20:21:57.812548 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:58.700723 sshd[3792]: pam_unix(sshd:session): session closed for user core
Feb 12 20:21:58.703208 systemd[1]: sshd@25-10.0.0.33:22-10.0.0.1:51430.service: Deactivated successfully.
Feb 12 20:21:58.703939 systemd[1]: session-26.scope: Deactivated successfully.
Feb 12 20:21:58.704624 systemd-logind[1104]: Session 26 logged out. Waiting for processes to exit.
Feb 12 20:21:58.705254 systemd-logind[1104]: Removed session 26.
Feb 12 20:21:58.812898 kubelet[1964]: E0212 20:21:58.812861 1964 dns.go:158] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 12 20:21:58.865521 kubelet[1964]: W0212 20:21:58.865482 1964 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod959a1f4d_08c6_4650_bdf4_90cb3a1f0e77.slice/cri-containerd-7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80.scope WatchSource:0}: task 7be8729932c7de15be8dcc1c52383344d235e1c3264d2cc4e8384e20f224fa80 not found: not found