Jul 2 07:53:27.794632 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024 Jul 2 07:53:27.794649 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:27.794658 kernel: BIOS-provided physical RAM map: Jul 2 07:53:27.794664 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:53:27.794669 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:53:27.794674 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:53:27.794681 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:53:27.794687 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:53:27.794692 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:53:27.794699 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:53:27.794704 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 07:53:27.794709 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:53:27.794715 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:53:27.794720 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:53:27.794727 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:53:27.794734 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:53:27.794740 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:53:27.794745 kernel: NX (Execute Disable) protection: active Jul 2 07:53:27.794751 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Jul 2 07:53:27.794757 kernel: e820: update [mem 0x9b3fa018-0x9b403c57] usable ==> usable Jul 2 07:53:27.794763 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Jul 2 07:53:27.794768 kernel: e820: update [mem 0x9b3bd018-0x9b3f9e57] usable ==> usable Jul 2 07:53:27.794774 kernel: extended physical RAM map: Jul 2 07:53:27.794780 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:53:27.794786 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:53:27.794793 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:53:27.794798 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:53:27.794804 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:53:27.794810 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:53:27.794816 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:53:27.794821 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b3bd017] usable Jul 2 07:53:27.794827 kernel: reserve setup_data: [mem 0x000000009b3bd018-0x000000009b3f9e57] usable Jul 2 07:53:27.794833 kernel: reserve setup_data: [mem 0x000000009b3f9e58-0x000000009b3fa017] usable Jul 2 07:53:27.794838 kernel: reserve setup_data: [mem 0x000000009b3fa018-0x000000009b403c57] 
usable Jul 2 07:53:27.794844 kernel: reserve setup_data: [mem 0x000000009b403c58-0x000000009c8eefff] usable Jul 2 07:53:27.794850 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Jul 2 07:53:27.794857 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:53:27.794863 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:53:27.794868 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:53:27.794874 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:53:27.794883 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:53:27.794889 kernel: efi: EFI v2.70 by EDK II Jul 2 07:53:27.794895 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018 Jul 2 07:53:27.794902 kernel: random: crng init done Jul 2 07:53:27.794909 kernel: SMBIOS 2.8 present. Jul 2 07:53:27.794915 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 07:53:27.794921 kernel: Hypervisor detected: KVM Jul 2 07:53:27.794928 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:53:27.794934 kernel: kvm-clock: cpu 0, msr 53192001, primary cpu clock Jul 2 07:53:27.794940 kernel: kvm-clock: using sched offset of 4192884347 cycles Jul 2 07:53:27.794947 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:53:27.794954 kernel: tsc: Detected 2794.748 MHz processor Jul 2 07:53:27.794961 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:53:27.794968 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:53:27.794974 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 07:53:27.794981 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:53:27.794987 kernel: Using GB pages for direct mapping Jul 2 07:53:27.794993 kernel: Secure boot disabled Jul 2 07:53:27.795000 kernel: ACPI: Early table checksum verification disabled Jul 2 07:53:27.795006 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 07:53:27.795013 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 07:53:27.795020 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:53:27.795027 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:53:27.795033 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 07:53:27.795039 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:53:27.795046 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:53:27.795052 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:53:27.795058 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 07:53:27.795065 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 07:53:27.795071 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 07:53:27.795079 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 07:53:27.795085 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 07:53:27.795091 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 07:53:27.795098 kernel: ACPI: Reserving WAET table memory at [mem 
0x9cb77000-0x9cb77027] Jul 2 07:53:27.795104 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 07:53:27.795110 kernel: No NUMA configuration found Jul 2 07:53:27.795117 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 07:53:27.795123 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 07:53:27.795130 kernel: Zone ranges: Jul 2 07:53:27.795137 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:53:27.795144 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 07:53:27.795150 kernel: Normal empty Jul 2 07:53:27.795157 kernel: Movable zone start for each node Jul 2 07:53:27.795163 kernel: Early memory node ranges Jul 2 07:53:27.795169 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:53:27.795175 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 07:53:27.795182 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 07:53:27.795188 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 07:53:27.795195 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 07:53:27.795202 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 07:53:27.795208 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 07:53:27.795214 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:53:27.795221 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:53:27.795227 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 07:53:27.795233 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:53:27.795240 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 07:53:27.795246 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 07:53:27.795253 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 07:53:27.795260 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:53:27.795266 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:53:27.795272 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:53:27.795279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:53:27.795285 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:53:27.795291 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:53:27.795298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:53:27.795304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:53:27.795311 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:53:27.795318 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:53:27.795324 kernel: TSC deadline timer available Jul 2 07:53:27.795330 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 07:53:27.795337 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 07:53:27.795343 kernel: kvm-guest: setup PV sched yield Jul 2 07:53:27.795349 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 07:53:27.795356 kernel: Booting paravirtualized kernel on KVM Jul 2 07:53:27.795363 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:53:27.795369 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Jul 2 07:53:27.795376 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Jul 2 07:53:27.795383 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Jul 
2 07:53:27.795393 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 07:53:27.795400 kernel: kvm-guest: setup async PF for cpu 0 Jul 2 07:53:27.795407 kernel: kvm-guest: stealtime: cpu 0, msr 9b01c0c0 Jul 2 07:53:27.795413 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:53:27.795420 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:53:27.795427 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 2 07:53:27.795433 kernel: Policy zone: DMA32 Jul 2 07:53:27.795441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:27.795448 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:53:27.795456 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:53:27.795463 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:53:27.795469 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:53:27.795477 kernel: Memory: 2398448K/2567000K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 168292K reserved, 0K cma-reserved) Jul 2 07:53:27.795485 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 07:53:27.795491 kernel: ftrace: allocating 34514 entries in 135 pages Jul 2 07:53:27.795498 kernel: ftrace: allocated 135 pages with 4 groups Jul 2 07:53:27.795505 kernel: rcu: Hierarchical RCU implementation. Jul 2 07:53:27.795512 kernel: rcu: RCU event tracing is enabled. Jul 2 07:53:27.795519 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 07:53:27.795526 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:53:27.795532 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:53:27.795547 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:53:27.795555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 07:53:27.795563 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 07:53:27.795569 kernel: Console: colour dummy device 80x25 Jul 2 07:53:27.795576 kernel: printk: console [ttyS0] enabled Jul 2 07:53:27.795583 kernel: ACPI: Core revision 20210730 Jul 2 07:53:27.795600 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:53:27.795615 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:53:27.795621 kernel: x2apic enabled Jul 2 07:53:27.795628 kernel: Switched APIC routing to physical x2apic. Jul 2 07:53:27.795635 kernel: kvm-guest: setup PV IPIs Jul 2 07:53:27.795643 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:53:27.795650 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 07:53:27.795657 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 07:53:27.795664 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 07:53:27.795671 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 07:53:27.795677 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 07:53:27.795684 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:53:27.795691 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:53:27.795698 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:53:27.795705 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:53:27.795712 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 07:53:27.795719 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 07:53:27.795726 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:53:27.795733 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 2 07:53:27.795740 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:53:27.795746 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:53:27.795753 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:53:27.795760 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:53:27.795768 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 2 07:53:27.795775 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:53:27.795781 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:53:27.795788 kernel: LSM: Security Framework initializing Jul 2 07:53:27.795795 kernel: SELinux: Initializing. Jul 2 07:53:27.795802 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:53:27.795808 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:53:27.795815 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 07:53:27.795824 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 07:53:27.795830 kernel: ... version: 0 Jul 2 07:53:27.795837 kernel: ... bit width: 48 Jul 2 07:53:27.795844 kernel: ... generic registers: 6 Jul 2 07:53:27.795850 kernel: ... value mask: 0000ffffffffffff Jul 2 07:53:27.795857 kernel: ... max period: 00007fffffffffff Jul 2 07:53:27.795864 kernel: ... fixed-purpose events: 0 Jul 2 07:53:27.795870 kernel: ... event mask: 000000000000003f Jul 2 07:53:27.795877 kernel: signal: max sigframe size: 1776 Jul 2 07:53:27.795884 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:53:27.795892 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:53:27.795898 kernel: x86: Booting SMP configuration: Jul 2 07:53:27.795905 kernel: .... 
node #0, CPUs: #1 Jul 2 07:53:27.795912 kernel: kvm-clock: cpu 1, msr 53192041, secondary cpu clock Jul 2 07:53:27.795918 kernel: kvm-guest: setup async PF for cpu 1 Jul 2 07:53:27.795925 kernel: kvm-guest: stealtime: cpu 1, msr 9b09c0c0 Jul 2 07:53:27.795931 kernel: #2 Jul 2 07:53:27.795938 kernel: kvm-clock: cpu 2, msr 53192081, secondary cpu clock Jul 2 07:53:27.795945 kernel: kvm-guest: setup async PF for cpu 2 Jul 2 07:53:27.795953 kernel: kvm-guest: stealtime: cpu 2, msr 9b11c0c0 Jul 2 07:53:27.795960 kernel: #3 Jul 2 07:53:27.795966 kernel: kvm-clock: cpu 3, msr 531920c1, secondary cpu clock Jul 2 07:53:27.795973 kernel: kvm-guest: setup async PF for cpu 3 Jul 2 07:53:27.795980 kernel: kvm-guest: stealtime: cpu 3, msr 9b19c0c0 Jul 2 07:53:27.795987 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 07:53:27.795993 kernel: smpboot: Max logical packages: 1 Jul 2 07:53:27.796000 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 07:53:27.796007 kernel: devtmpfs: initialized Jul 2 07:53:27.796015 kernel: x86/mm: Memory block size: 128MB Jul 2 07:53:27.796021 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 07:53:27.796028 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 07:53:27.796035 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 07:53:27.796042 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 07:53:27.796049 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 07:53:27.796056 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:53:27.796062 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 07:53:27.796069 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:53:27.796077 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:53:27.796084 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:53:27.796091 kernel: audit: type=2000 audit(1719906807.444:1): state=initialized audit_enabled=0 res=1 Jul 2 07:53:27.796097 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:53:27.796104 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:53:27.796110 kernel: cpuidle: using governor menu Jul 2 07:53:27.796117 kernel: ACPI: bus type PCI registered Jul 2 07:53:27.796124 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:53:27.796130 kernel: dca service started, version 1.12.1 Jul 2 07:53:27.796138 kernel: PCI: Using configuration type 1 for base access Jul 2 07:53:27.796145 kernel: PCI: Using configuration type 1 for extended access Jul 2 07:53:27.796152 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:53:27.796159 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:53:27.796165 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:53:27.796172 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:53:27.796179 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:53:27.796185 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:53:27.796192 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:53:27.796200 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 2 07:53:27.796207 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 2 07:53:27.796213 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 2 07:53:27.796220 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:53:27.796227 kernel: ACPI: Interpreter enabled Jul 2 07:53:27.796233 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:53:27.796240 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:53:27.796247 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:53:27.796254 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:53:27.796262 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:53:27.796369 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:53:27.796380 kernel: acpiphp: Slot [3] registered Jul 2 07:53:27.796387 kernel: acpiphp: Slot [4] registered Jul 2 07:53:27.796394 kernel: acpiphp: Slot [5] registered Jul 2 07:53:27.796401 kernel: acpiphp: Slot [6] registered Jul 2 07:53:27.796407 kernel: acpiphp: Slot [7] registered Jul 2 07:53:27.796414 kernel: acpiphp: Slot [8] registered Jul 2 07:53:27.796421 kernel: acpiphp: Slot [9] registered Jul 2 07:53:27.796429 kernel: acpiphp: Slot [10] registered Jul 2 07:53:27.796436 kernel: acpiphp: Slot [11] registered Jul 2 07:53:27.796443 kernel: acpiphp: Slot [12] registered Jul 2 07:53:27.796449 kernel: acpiphp: Slot [13] registered Jul 2 07:53:27.796456 kernel: acpiphp: Slot [14] registered Jul 2 07:53:27.796463 kernel: acpiphp: Slot [15] registered Jul 2 07:53:27.796469 kernel: acpiphp: Slot [16] registered Jul 2 07:53:27.796476 kernel: acpiphp: Slot [17] registered Jul 2 07:53:27.796483 kernel: acpiphp: Slot [18] registered Jul 2 07:53:27.796493 kernel: acpiphp: Slot [19] registered Jul 2 07:53:27.796501 kernel: acpiphp: Slot [20] registered Jul 2 07:53:27.796508 kernel: acpiphp: Slot [21] registered Jul 2 07:53:27.796516 kernel: acpiphp: Slot [22] registered Jul 2 07:53:27.796523 kernel: acpiphp: Slot [23] registered Jul 2 07:53:27.796529 kernel: acpiphp: Slot [24] registered Jul 2 07:53:27.796536 kernel: acpiphp: Slot [25] registered Jul 2 07:53:27.796552 kernel: acpiphp: Slot [26] registered Jul 2 07:53:27.796558 kernel: acpiphp: Slot [27] registered Jul 2 07:53:27.796565 kernel: acpiphp: Slot [28] registered Jul 2 07:53:27.796573 kernel: acpiphp: Slot [29] registered Jul 2 07:53:27.796580 kernel: acpiphp: Slot [30] registered Jul 2 07:53:27.796586 kernel: acpiphp: Slot [31] registered Jul 2 07:53:27.796619 kernel: PCI host bridge to bus 0000:00 Jul 2 07:53:27.796699 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:53:27.796760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:53:27.796821 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:53:27.796882 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 07:53:27.796943 kernel: pci_bus 0000:00: 
root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 07:53:27.797001 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:53:27.797083 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:53:27.797158 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:53:27.797236 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:53:27.797303 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 07:53:27.797377 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:53:27.797443 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:53:27.797510 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:53:27.797588 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:53:27.797676 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:53:27.797745 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:53:27.797815 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 07:53:27.797889 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 07:53:27.797956 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 07:53:27.798024 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 07:53:27.798090 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 07:53:27.798156 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 07:53:27.798258 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:53:27.798339 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:53:27.798409 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 07:53:27.798479 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 07:53:27.798572 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 07:53:27.798664 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:53:27.798734 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:53:27.798803 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 07:53:27.798873 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 07:53:27.798948 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:53:27.799018 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:53:27.799086 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 07:53:27.799153 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 07:53:27.799221 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 07:53:27.799231 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:53:27.799240 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:53:27.799247 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:53:27.799253 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:53:27.799260 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:53:27.799267 kernel: iommu: Default domain type: Translated Jul 2 07:53:27.799274 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:53:27.799341 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:53:27.799409 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:53:27.799476 kernel: pci 
0000:00:02.0: vgaarb: bridge control possible Jul 2 07:53:27.799488 kernel: vgaarb: loaded Jul 2 07:53:27.799495 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:53:27.799502 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:53:27.799509 kernel: PTP clock support registered Jul 2 07:53:27.799516 kernel: Registered efivars operations Jul 2 07:53:27.799523 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:53:27.799529 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:53:27.799536 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 07:53:27.799550 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 07:53:27.799558 kernel: e820: reserve RAM buffer [mem 0x9b3bd018-0x9bffffff] Jul 2 07:53:27.799564 kernel: e820: reserve RAM buffer [mem 0x9b3fa018-0x9bffffff] Jul 2 07:53:27.799571 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 07:53:27.799578 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 07:53:27.799584 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:53:27.799601 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:53:27.799607 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:53:27.799614 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:53:27.799621 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:53:27.799630 kernel: pnp: PnP ACPI init Jul 2 07:53:27.799709 kernel: pnp 00:02: [dma 2] Jul 2 07:53:27.799719 kernel: pnp: PnP ACPI: found 6 devices Jul 2 07:53:27.799726 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:53:27.799733 kernel: NET: Registered PF_INET protocol family Jul 2 07:53:27.799739 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:53:27.799746 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 07:53:27.799753 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:53:27.799762 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:53:27.799769 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 2 07:53:27.799776 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 07:53:27.799783 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:53:27.799790 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:53:27.799797 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:53:27.799803 kernel: NET: Registered PF_XDP protocol family Jul 2 07:53:27.799888 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 07:53:27.799968 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 07:53:27.800031 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:53:27.800093 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:53:27.800154 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:53:27.800215 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 07:53:27.800290 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 07:53:27.800364 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:53:27.800433 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:53:27.800506 kernel: pci 
0000:00:01.0: Activating ISA DMA hang workarounds Jul 2 07:53:27.800515 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:53:27.800523 kernel: Initialise system trusted keyrings Jul 2 07:53:27.800530 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 07:53:27.800537 kernel: Key type asymmetric registered Jul 2 07:53:27.800553 kernel: Asymmetric key parser 'x509' registered Jul 2 07:53:27.800560 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:53:27.800567 kernel: io scheduler mq-deadline registered Jul 2 07:53:27.800574 kernel: io scheduler kyber registered Jul 2 07:53:27.800583 kernel: io scheduler bfq registered Jul 2 07:53:27.800607 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:53:27.800615 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:53:27.800622 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:53:27.800629 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:53:27.800637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:53:27.800644 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:53:27.800651 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:53:27.800658 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:53:27.800667 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:53:27.800742 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 07:53:27.800755 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:53:27.800816 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 07:53:27.800881 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:53:27 UTC (1719906807) Jul 2 07:53:27.800952 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 07:53:27.800961 kernel: efifb: probing for efifb Jul 2 07:53:27.800969 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Jul 2 07:53:27.800976 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Jul 2 07:53:27.800983 kernel: efifb: scrolling: redraw Jul 2 07:53:27.800990 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 07:53:27.800998 kernel: Console: switching to colour frame buffer device 160x50 Jul 2 07:53:27.801005 kernel: fb0: EFI VGA frame buffer device Jul 2 07:53:27.801014 kernel: pstore: Registered efi as persistent store backend Jul 2 07:53:27.801021 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:53:27.801028 kernel: Segment Routing with IPv6 Jul 2 07:53:27.801035 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:53:27.801042 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:53:27.801050 kernel: Key type dns_resolver registered Jul 2 07:53:27.801057 kernel: IPI shorthand broadcast: enabled Jul 2 07:53:27.801064 kernel: sched_clock: Marking stable (408024846, 123641173)->(573126637, -41460618) Jul 2 07:53:27.801071 kernel: registered taskstats version 1 Jul 2 07:53:27.801080 kernel: Loading compiled-in X.509 certificates Jul 2 07:53:27.801087 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42' Jul 2 07:53:27.801094 kernel: Key type .fscrypt registered Jul 2 07:53:27.801101 kernel: Key type fscrypt-provisioning registered Jul 2 07:53:27.801108 kernel: pstore: Using crash dump compression: deflate Jul 2 07:53:27.801115 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:53:27.801122 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:53:27.801130 kernel: ima: No architecture policies found Jul 2 07:53:27.801137 kernel: clk: Disabling unused clocks Jul 2 07:53:27.801145 kernel: Freeing unused kernel image (initmem) memory: 47444K Jul 2 07:53:27.801152 kernel: Write protecting the kernel read-only data: 28672k Jul 2 07:53:27.801160 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:53:27.801167 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K Jul 2 07:53:27.801174 kernel: Run /init as init process Jul 2 07:53:27.801181 kernel: with arguments: Jul 2 07:53:27.801188 kernel: /init Jul 2 07:53:27.801196 kernel: with environment: Jul 2 07:53:27.801203 kernel: HOME=/ Jul 2 07:53:27.801211 kernel: TERM=linux Jul 2 07:53:27.801218 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:53:27.801227 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:53:27.801236 systemd[1]: Detected virtualization kvm. Jul 2 07:53:27.801244 systemd[1]: Detected architecture x86-64. Jul 2 07:53:27.801251 systemd[1]: Running in initrd. Jul 2 07:53:27.801259 systemd[1]: No hostname configured, using default hostname. Jul 2 07:53:27.801266 systemd[1]: Hostname set to . Jul 2 07:53:27.801275 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:53:27.801282 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:53:27.801290 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:53:27.801297 systemd[1]: Reached target cryptsetup.target. Jul 2 07:53:27.801305 systemd[1]: Reached target paths.target. Jul 2 07:53:27.801312 systemd[1]: Reached target slices.target. Jul 2 07:53:27.801319 systemd[1]: Reached target swap.target. Jul 2 07:53:27.801327 systemd[1]: Reached target timers.target. Jul 2 07:53:27.801336 systemd[1]: Listening on iscsid.socket. Jul 2 07:53:27.801343 systemd[1]: Listening on iscsiuio.socket. Jul 2 07:53:27.801351 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:53:27.801358 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:53:27.801366 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:53:27.801373 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:53:27.801381 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:53:27.801388 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:53:27.801397 systemd[1]: Reached target sockets.target. Jul 2 07:53:27.801404 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:53:27.801412 systemd[1]: Finished network-cleanup.service. Jul 2 07:53:27.801420 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:53:27.801427 systemd[1]: Starting systemd-journald.service... Jul 2 07:53:27.801435 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:53:27.801442 systemd[1]: Starting systemd-resolved.service... Jul 2 07:53:27.801449 systemd[1]: Starting systemd-vconsole-setup.service... Jul 2 07:53:27.801457 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:53:27.801466 kernel: audit: type=1130 audit(1719906807.794:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:27.801473 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:53:27.801481 kernel: audit: type=1130 audit(1719906807.799:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.801492 systemd-journald[198]: Journal started Jul 2 07:53:27.801528 systemd-journald[198]: Runtime Journal (/run/log/journal/305d42d39b594c7a9fdf35cc15dc1d80) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:53:27.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.803619 systemd[1]: Started systemd-journald.service. Jul 2 07:53:27.803839 systemd-modules-load[199]: Inserted module 'overlay' Jul 2 07:53:27.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.806805 systemd[1]: Finished systemd-vconsole-setup.service. Jul 2 07:53:27.813877 kernel: audit: type=1130 audit(1719906807.804:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.813892 kernel: audit: type=1130 audit(1719906807.808:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.811911 systemd[1]: Starting dracut-cmdline-ask.service... Jul 2 07:53:27.813133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:53:27.820287 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:53:27.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.825634 kernel: audit: type=1130 audit(1719906807.820:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.825656 systemd-resolved[200]: Positive Trust Anchors: Jul 2 07:53:27.825664 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:53:27.825691 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:53:27.827813 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 2 07:53:27.828477 systemd[1]: Started systemd-resolved.service. Jul 2 07:53:27.828825 systemd[1]: Reached target nss-lookup.target. Jul 2 07:53:27.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.835616 systemd[1]: Finished dracut-cmdline-ask.service. Jul 2 07:53:27.839810 kernel: audit: type=1130 audit(1719906807.828:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.839826 kernel: audit: type=1130 audit(1719906807.838:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.839433 systemd[1]: Starting dracut-cmdline.service... Jul 2 07:53:27.846610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:53:27.847057 dracut-cmdline[216]: dracut-dracut-053 Jul 2 07:53:27.849339 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:53:27.855059 systemd-modules-load[199]: Inserted module 'br_netfilter' Jul 2 07:53:27.855960 kernel: Bridge firewalling registered Jul 2 07:53:27.871614 kernel: SCSI subsystem initialized Jul 2 07:53:27.882704 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:53:27.882725 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:53:27.883969 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:53:27.886603 systemd-modules-load[199]: Inserted module 'dm_multipath' Jul 2 07:53:27.887192 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:53:27.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.888933 systemd[1]: Starting systemd-sysctl.service... 
Jul 2 07:53:27.893617 kernel: audit: type=1130 audit(1719906807.887:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.894497 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:53:27.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.899608 kernel: audit: type=1130 audit(1719906807.895:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.904613 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:53:27.920616 kernel: iscsi: registered transport (tcp) Jul 2 07:53:27.941614 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:53:27.941630 kernel: QLogic iSCSI HBA Driver Jul 2 07:53:27.968732 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:53:27.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:27.970968 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:53:28.015614 kernel: raid6: avx2x4 gen() 30955 MB/s Jul 2 07:53:28.032610 kernel: raid6: avx2x4 xor() 8579 MB/s Jul 2 07:53:28.049609 kernel: raid6: avx2x2 gen() 32542 MB/s Jul 2 07:53:28.066609 kernel: raid6: avx2x2 xor() 19191 MB/s Jul 2 07:53:28.083609 kernel: raid6: avx2x1 gen() 26472 MB/s Jul 2 07:53:28.100609 kernel: raid6: avx2x1 xor() 15125 MB/s Jul 2 07:53:28.117609 kernel: raid6: sse2x4 gen() 14746 MB/s Jul 2 07:53:28.134610 kernel: raid6: sse2x4 xor() 7442 MB/s Jul 2 07:53:28.151611 kernel: raid6: sse2x2 gen() 16381 MB/s Jul 2 07:53:28.168609 kernel: raid6: sse2x2 xor() 9821 MB/s Jul 2 07:53:28.185609 kernel: raid6: sse2x1 gen() 12364 MB/s Jul 2 07:53:28.203000 kernel: raid6: sse2x1 xor() 7774 MB/s Jul 2 07:53:28.203011 kernel: raid6: using algorithm avx2x2 gen() 32542 MB/s Jul 2 07:53:28.203020 kernel: raid6: .... xor() 19191 MB/s, rmw enabled Jul 2 07:53:28.203725 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:53:28.215613 kernel: xor: automatically using best checksumming function avx Jul 2 07:53:28.304613 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:53:28.312933 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:53:28.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.314000 audit: BPF prog-id=7 op=LOAD Jul 2 07:53:28.314000 audit: BPF prog-id=8 op=LOAD Jul 2 07:53:28.315453 systemd[1]: Starting systemd-udevd.service... Jul 2 07:53:28.327545 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 2 07:53:28.331293 systemd[1]: Started systemd-udevd.service. Jul 2 07:53:28.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.335910 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:53:28.345315 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Jul 2 07:53:28.369792 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 2 07:53:28.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.371320 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:28.406541 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:28.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:28.437624 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:53:28.448241 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:53:28.448285 kernel: AES CTR mode by8 optimization enabled Jul 2 07:53:28.449717 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:53:28.461352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:53:28.461373 kernel: GPT:9289727 != 19775487 Jul 2 07:53:28.461390 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:53:28.461402 kernel: GPT:9289727 != 19775487 Jul 2 07:53:28.461413 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:53:28.461424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:53:28.466665 kernel: libata version 3.00 loaded. Jul 2 07:53:28.470607 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:53:28.473607 kernel: scsi host0: ata_piix Jul 2 07:53:28.476611 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (450) Jul 2 07:53:28.476643 kernel: scsi host1: ata_piix Jul 2 07:53:28.477936 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:53:28.477956 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:53:28.478294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:53:28.481345 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 2 07:53:28.485814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:53:28.495240 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:53:28.499049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:53:28.500734 systemd[1]: Starting disk-uuid.service... Jul 2 07:53:28.506744 disk-uuid[519]: Primary Header is updated. Jul 2 07:53:28.506744 disk-uuid[519]: Secondary Entries is updated. Jul 2 07:53:28.506744 disk-uuid[519]: Secondary Header is updated. Jul 2 07:53:28.510618 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:53:28.513610 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:53:28.633700 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:53:28.635662 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:53:28.667943 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:53:28.668088 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:53:28.685616 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:53:29.514464 disk-uuid[520]: The operation has completed successfully. Jul 2 07:53:29.515858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:53:29.536954 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:53:29.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:29.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.537054 systemd[1]: Finished disk-uuid.service. Jul 2 07:53:29.542059 systemd[1]: Starting verity-setup.service... Jul 2 07:53:29.555618 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:53:29.575265 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:53:29.577873 systemd[1]: Mounting sysusr-usr.mount... Jul 2 07:53:29.580012 systemd[1]: Finished verity-setup.service. Jul 2 07:53:29.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.636617 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:53:29.636748 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:53:29.638245 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:53:29.640182 systemd[1]: Starting ignition-setup.service... Jul 2 07:53:29.642116 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:53:29.650322 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:29.650348 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:53:29.650358 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:53:29.658069 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:53:29.666126 systemd[1]: Finished ignition-setup.service. Jul 2 07:53:29.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.668403 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:53:29.704441 ignition[634]: Ignition 2.14.0 Jul 2 07:53:29.704452 ignition[634]: Stage: fetch-offline Jul 2 07:53:29.704534 ignition[634]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:29.704542 ignition[634]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:29.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.707084 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:53:29.710000 audit: BPF prog-id=9 op=LOAD Jul 2 07:53:29.704646 ignition[634]: parsed url from cmdline: "" Jul 2 07:53:29.704649 ignition[634]: no config URL provided Jul 2 07:53:29.704653 ignition[634]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:53:29.711176 systemd[1]: Starting systemd-networkd.service... 
Jul 2 07:53:29.704659 ignition[634]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:53:29.704674 ignition[634]: op(1): [started] loading QEMU firmware config module Jul 2 07:53:29.704681 ignition[634]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:53:29.708474 ignition[634]: op(1): [finished] loading QEMU firmware config module Jul 2 07:53:29.754179 ignition[634]: parsing config with SHA512: 7d8c8254dc3cf5097fb600b199a7ea97e8d8bae3918ea427b06c1ba8b5e9e4e36b8c071ea24ca1adea6bab10a25def4f1cff01a29afa4ae7df118ec1218b4bec Jul 2 07:53:29.760109 unknown[634]: fetched base config from "system" Jul 2 07:53:29.760121 unknown[634]: fetched user config from "qemu" Jul 2 07:53:29.760640 ignition[634]: fetch-offline: fetch-offline passed Jul 2 07:53:29.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.761718 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:53:29.760688 ignition[634]: Ignition finished successfully Jul 2 07:53:29.775639 systemd-networkd[713]: lo: Link UP Jul 2 07:53:29.775648 systemd-networkd[713]: lo: Gained carrier Jul 2 07:53:29.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.776017 systemd-networkd[713]: Enumeration completed Jul 2 07:53:29.776095 systemd[1]: Started systemd-networkd.service. Jul 2 07:53:29.776211 systemd-networkd[713]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:53:29.777565 systemd[1]: Reached target network.target. Jul 2 07:53:29.777634 systemd-networkd[713]: eth0: Link UP Jul 2 07:53:29.777637 systemd-networkd[713]: eth0: Gained carrier Jul 2 07:53:29.779032 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:53:29.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.788536 ignition[715]: Ignition 2.14.0 Jul 2 07:53:29.779773 systemd[1]: Starting ignition-kargs.service... Jul 2 07:53:29.790821 iscsid[724]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:29.790821 iscsid[724]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:53:29.790821 iscsid[724]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:53:29.790821 iscsid[724]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:53:29.790821 iscsid[724]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 07:53:29.790821 iscsid[724]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:53:29.790821 iscsid[724]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:53:29.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.788542 ignition[715]: Stage: kargs Jul 2 07:53:29.781188 systemd[1]: Starting iscsiuio.service... Jul 2 07:53:29.788637 ignition[715]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:29.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.785125 systemd[1]: Started iscsiuio.service. Jul 2 07:53:29.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.788645 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:29.787517 systemd[1]: Starting iscsid.service... Jul 2 07:53:29.789569 ignition[715]: kargs: kargs passed Jul 2 07:53:29.790918 systemd[1]: Finished ignition-kargs.service. Jul 2 07:53:29.789625 ignition[715]: Ignition finished successfully Jul 2 07:53:29.791853 systemd[1]: Started iscsid.service. Jul 2 07:53:29.805034 ignition[726]: Ignition 2.14.0 Jul 2 07:53:29.792669 systemd-networkd[713]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:53:29.805040 ignition[726]: Stage: disks Jul 2 07:53:29.794374 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:53:29.805122 ignition[726]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:29.798235 systemd[1]: Starting ignition-disks.service... Jul 2 07:53:29.831386 systemd-fsck[746]: ROOT: clean, 614/553520 files, 56020/553472 blocks Jul 2 07:53:29.805130 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:29.805394 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:53:29.806024 ignition[726]: disks: disks passed Jul 2 07:53:29.806492 systemd[1]: Reached target remote-fs-pre.target. Jul 2 07:53:29.806059 ignition[726]: Ignition finished successfully Jul 2 07:53:29.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.808856 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:53:29.809751 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:29.811076 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:53:29.812009 systemd[1]: Finished ignition-disks.service. Jul 2 07:53:29.813465 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:53:29.846722 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Jul 2 07:53:29.815203 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:53:29.815789 systemd[1]: Reached target local-fs.target. Jul 2 07:53:29.815949 systemd[1]: Reached target sysinit.target. Jul 2 07:53:29.816118 systemd[1]: Reached target basic.target. Jul 2 07:53:29.818039 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:53:29.819069 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:53:29.835718 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:53:29.838221 systemd[1]: Mounting sysroot.mount... Jul 2 07:53:29.845401 systemd[1]: Mounted sysroot.mount. Jul 2 07:53:29.846767 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:53:29.849578 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:53:29.859436 initrd-setup-root[756]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:53:29.851215 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:53:29.851244 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:53:29.851263 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:53:29.865220 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:53:29.853152 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:53:29.867058 initrd-setup-root[772]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:53:29.854671 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:53:29.869035 initrd-setup-root[780]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:53:29.890569 systemd[1]: Finished initrd-setup-root.service. Jul 2 07:53:29.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.892082 systemd[1]: Starting ignition-mount.service... Jul 2 07:53:29.893366 systemd[1]: Starting sysroot-boot.service... Jul 2 07:53:29.897148 bash[797]: umount: /sysroot/usr/share/oem: not mounted. Jul 2 07:53:29.905337 ignition[799]: INFO : Ignition 2.14.0 Jul 2 07:53:29.905337 ignition[799]: INFO : Stage: mount Jul 2 07:53:29.906939 ignition[799]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:29.906939 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:29.906939 ignition[799]: INFO : mount: mount passed Jul 2 07:53:29.906939 ignition[799]: INFO : Ignition finished successfully Jul 2 07:53:29.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:29.907550 systemd[1]: Finished ignition-mount.service. Jul 2 07:53:29.911860 systemd[1]: Finished sysroot-boot.service. Jul 2 07:53:29.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:30.587740 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
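The unit name sysroot-usr-share-oem.mount unescapes to the mount point /sysroot/usr/share/oem, and the BTRFS messages that follow show it being satisfied from the partition labelled OEM on /dev/vda6. A rough illustrative reconstruction of what the runtime mount unit amounts to; only the mount point, label and filesystem type come from the log, while the by-label What= is an assumption:

    # sysroot-usr-share-oem.mount -- illustrative reconstruction only
    [Mount]
    What=/dev/disk/by-label/OEM
    Where=/sysroot/usr/share/oem
    Type=btrfs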
Jul 2 07:53:30.594611 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (807) Jul 2 07:53:30.596839 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:53:30.596860 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:53:30.596870 kernel: BTRFS info (device vda6): has skinny extents Jul 2 07:53:30.600818 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:53:30.603395 systemd[1]: Starting ignition-files.service... Jul 2 07:53:30.618092 ignition[827]: INFO : Ignition 2.14.0 Jul 2 07:53:30.618092 ignition[827]: INFO : Stage: files Jul 2 07:53:30.619723 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:30.619723 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:30.619723 ignition[827]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:53:30.623038 ignition[827]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:53:30.623038 ignition[827]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:53:30.625975 ignition[827]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:53:30.627583 ignition[827]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:53:30.629495 unknown[827]: wrote ssh authorized keys file for user: core Jul 2 07:53:30.630690 ignition[827]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:53:30.632742 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:53:30.634793 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 07:53:30.636578 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:53:30.638472 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:53:30.668156 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 2 07:53:30.724304 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:53:30.726275 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:53:30.728029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:53:30.728029 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:53:30.731843 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:53:30.733530 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:53:30.735638 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:53:30.735638 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:53:30.735638 
ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:53:30.740882 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 07:53:31.143874 systemd-networkd[713]: eth0: Gained IPv6LL Jul 2 07:53:31.167873 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 2 07:53:31.534475 ignition[827]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 07:53:31.534475 ignition[827]: INFO : files: op(c): [started] processing unit "containerd.service" Jul 2 07:53:31.538019 ignition[827]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:53:31.540363 ignition[827]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 07:53:31.540363 ignition[827]: INFO : files: op(c): [finished] processing unit "containerd.service" Jul 2 07:53:31.540363 ignition[827]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jul 2 07:53:31.545109 ignition[827]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:53:31.545109 ignition[827]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:53:31.545109 ignition[827]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jul 2 07:53:31.545109 ignition[827]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jul 2 07:53:31.551152 ignition[827]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:53:31.551152 ignition[827]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:53:31.551152 ignition[827]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jul 2 07:53:31.551152 ignition[827]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:53:31.557589 ignition[827]: INFO : files: op(12): [finished] 
setting preset to enabled for "prepare-helm.service" Jul 2 07:53:31.557589 ignition[827]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:53:31.557589 ignition[827]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:53:31.580660 ignition[827]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:53:31.580660 ignition[827]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:53:31.583761 ignition[827]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:53:31.585418 ignition[827]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:53:31.585418 ignition[827]: INFO : files: files passed Jul 2 07:53:31.587829 ignition[827]: INFO : Ignition finished successfully Jul 2 07:53:31.589726 systemd[1]: Finished ignition-files.service. Jul 2 07:53:31.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.591331 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:53:31.591932 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:53:31.592403 systemd[1]: Starting ignition-quench.service... Jul 2 07:53:31.595881 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:53:31.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.595971 systemd[1]: Finished ignition-quench.service. Jul 2 07:53:31.602489 initrd-setup-root-after-ignition[852]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 2 07:53:31.605155 initrd-setup-root-after-ignition[854]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:53:31.607103 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:53:31.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.607641 systemd[1]: Reached target ignition-complete.target. Jul 2 07:53:31.610655 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:53:31.622517 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:53:31.622632 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:53:31.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:31.623220 systemd[1]: Reached target initrd-fs.target. Jul 2 07:53:31.625150 systemd[1]: Reached target initrd.target. Jul 2 07:53:31.626636 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:53:31.627188 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:53:31.636539 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:53:31.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.637722 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:53:31.645943 systemd[1]: Stopped target nss-lookup.target. Jul 2 07:53:31.646261 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:53:31.647938 systemd[1]: Stopped target timers.target. Jul 2 07:53:31.649340 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:53:31.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.649428 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:53:31.650976 systemd[1]: Stopped target initrd.target. Jul 2 07:53:31.652558 systemd[1]: Stopped target basic.target. Jul 2 07:53:31.653959 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:53:31.654321 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:53:31.657102 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:53:31.657429 systemd[1]: Stopped target remote-fs.target. Jul 2 07:53:31.661806 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:53:31.662139 systemd[1]: Stopped target sysinit.target. Jul 2 07:53:31.664820 systemd[1]: Stopped target local-fs.target. Jul 2 07:53:31.665130 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:53:31.666890 systemd[1]: Stopped target swap.target. Jul 2 07:53:31.668204 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:53:31.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.668295 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:53:31.669740 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:53:31.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.671267 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:53:31.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.671382 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:53:31.673017 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:53:31.673139 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:53:31.674287 systemd[1]: Stopped target paths.target. Jul 2 07:53:31.675889 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:53:31.680636 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:53:31.681125 systemd[1]: Stopped target slices.target. 
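Looking back at the files stage logged above: each op(N) there corresponds to one entry of the Ignition config delivered over the QEMU firmware channel, but the config itself is never printed to the journal. The fragment below is a hedged sketch in Butane YAML (flatcar variant) of a few of those entries, built only from the paths and URLs visible in the log; the schema version, file modes and the unit/drop-in bodies are placeholders, not recovered content.

    # Hedged sketch only -- not the actual config used on this boot
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw
    systemd:
      units:
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                [Service]
                # real drop-in body not recorded in the journal
        - name: prepare-helm.service
          enabled: true          # matches "setting preset to enabled"
          contents: |
            [Unit]
            Description=placeholder; real unit body not in the log
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false         # matches "setting preset to disabled"

A file of this shape would be transpiled with the butane tool into the JSON form that Ignition consumes; the hash of the real config was logged earlier during the fetch-offline stage.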
Jul 2 07:53:31.682920 systemd[1]: Stopped target sockets.target. Jul 2 07:53:31.684116 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:53:31.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.684207 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:53:31.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.685508 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:53:31.691005 iscsid[724]: iscsid shutting down. Jul 2 07:53:31.685601 systemd[1]: Stopped ignition-files.service. Jul 2 07:53:31.688229 systemd[1]: Stopping ignition-mount.service... Jul 2 07:53:31.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.689490 systemd[1]: Stopping iscsid.service... Jul 2 07:53:31.697004 ignition[867]: INFO : Ignition 2.14.0 Jul 2 07:53:31.697004 ignition[867]: INFO : Stage: umount Jul 2 07:53:31.697004 ignition[867]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:53:31.697004 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:53:31.697004 ignition[867]: INFO : umount: umount passed Jul 2 07:53:31.697004 ignition[867]: INFO : Ignition finished successfully Jul 2 07:53:31.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.691990 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:53:31.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.692773 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 2 07:53:31.692942 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 07:53:31.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.694683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:53:31.694769 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:53:31.697704 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:53:31.697782 systemd[1]: Stopped iscsid.service. Jul 2 07:53:31.698898 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:53:31.698960 systemd[1]: Stopped ignition-mount.service. Jul 2 07:53:31.701004 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:53:31.701071 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:53:31.703687 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:53:31.703711 systemd[1]: Closed iscsid.socket. Jul 2 07:53:31.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.704458 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:53:31.704487 systemd[1]: Stopped ignition-disks.service. Jul 2 07:53:31.706113 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:53:31.706142 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:53:31.706955 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:53:31.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.706983 systemd[1]: Stopped ignition-setup.service. Jul 2 07:53:31.707473 systemd[1]: Stopping iscsiuio.service... Jul 2 07:53:31.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.710027 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:53:31.710103 systemd[1]: Stopped iscsiuio.service. Jul 2 07:53:31.711699 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:53:31.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.711882 systemd[1]: Stopped target network.target. Jul 2 07:53:31.713719 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:53:31.713743 systemd[1]: Closed iscsiuio.socket. Jul 2 07:53:31.741000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:53:31.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.715256 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:53:31.716860 systemd[1]: Stopping systemd-resolved.service... 
Jul 2 07:53:31.720654 systemd-networkd[713]: eth0: DHCPv6 lease lost Jul 2 07:53:31.743000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:53:31.721992 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:53:31.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.722073 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:53:31.724719 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:53:31.724757 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:53:31.727212 systemd[1]: Stopping network-cleanup.service... Jul 2 07:53:31.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.728900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:53:31.728948 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:53:31.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.730625 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:53:31.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.730665 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:53:31.732531 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:53:31.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.732566 systemd[1]: Stopped systemd-modules-load.service. Jul 2 07:53:31.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.734222 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:53:31.736506 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:53:31.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.736953 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:53:31.737047 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:53:31.741052 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:53:31.741150 systemd[1]: Stopped network-cleanup.service. Jul 2 07:53:31.744317 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 07:53:31.744445 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:53:31.746816 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:53:31.746878 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:53:31.748511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:53:31.748539 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:53:31.750317 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:53:31.750359 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:53:31.751912 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:53:31.751946 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:53:31.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.753456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:53:31.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:31.753490 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:53:31.756051 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:53:31.756965 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 07:53:31.757012 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 07:53:31.758968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:53:31.759006 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:53:31.761291 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:53:31.788000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:53:31.788000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:53:31.788000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:53:31.761334 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:53:31.789000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:53:31.789000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:53:31.763251 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 07:53:31.763660 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:53:31.763733 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:53:31.777230 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:53:31.777323 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:53:31.778780 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:53:31.780239 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:53:31.780273 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:53:31.781562 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:53:31.786665 systemd[1]: Switching root. Jul 2 07:53:31.806279 systemd-journald[198]: Journal stopped Jul 2 07:53:34.364013 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Jul 2 07:53:34.364067 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:53:34.364083 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 2 07:53:34.364093 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:53:34.364102 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:53:34.364112 kernel: SELinux: policy capability open_perms=1 Jul 2 07:53:34.364122 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:53:34.364137 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:53:34.364150 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:53:34.364159 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:53:34.364169 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:53:34.364178 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:53:34.364188 systemd[1]: Successfully loaded SELinux policy in 39.215ms. Jul 2 07:53:34.364205 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.194ms. Jul 2 07:53:34.364218 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:53:34.364229 systemd[1]: Detected virtualization kvm. Jul 2 07:53:34.364239 systemd[1]: Detected architecture x86-64. Jul 2 07:53:34.364249 systemd[1]: Detected first boot. Jul 2 07:53:34.364259 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:53:34.364273 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:53:34.364284 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:53:34.364294 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:53:34.364310 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:53:34.364322 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:53:34.364333 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:53:34.364343 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 2 07:53:34.364353 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 07:53:34.364369 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:53:34.364381 systemd[1]: Created slice system-getty.slice. Jul 2 07:53:34.364391 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:53:34.364401 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:53:34.364412 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:53:34.364422 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:53:34.364432 systemd[1]: Created slice user.slice. Jul 2 07:53:34.364442 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:53:34.364453 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:53:34.364463 systemd[1]: Set up automount boot.automount. Jul 2 07:53:34.364474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:53:34.364485 systemd[1]: Reached target integritysetup.target. Jul 2 07:53:34.364495 systemd[1]: Reached target remote-cryptsetup.target. 
Jul 2 07:53:34.364504 systemd[1]: Reached target remote-fs.target. Jul 2 07:53:34.364514 systemd[1]: Reached target slices.target. Jul 2 07:53:34.364524 systemd[1]: Reached target swap.target. Jul 2 07:53:34.364536 systemd[1]: Reached target torcx.target. Jul 2 07:53:34.364546 systemd[1]: Reached target veritysetup.target. Jul 2 07:53:34.364557 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:53:34.364567 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:53:34.364577 systemd[1]: Listening on systemd-journald-audit.socket. Jul 2 07:53:34.364587 kernel: kauditd_printk_skb: 83 callbacks suppressed Jul 2 07:53:34.364611 kernel: audit: type=1400 audit(1719906814.264:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:34.364621 kernel: audit: type=1335 audit(1719906814.264:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 07:53:34.364631 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 2 07:53:34.364641 systemd[1]: Listening on systemd-journald.socket. Jul 2 07:53:34.364651 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:53:34.364663 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:53:34.364673 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 07:53:34.364684 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:53:34.364694 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:53:34.364704 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:53:34.364714 systemd[1]: Mounting media.mount... Jul 2 07:53:34.364724 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:34.364735 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:53:34.364745 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:53:34.364755 systemd[1]: Mounting tmp.mount... Jul 2 07:53:34.364767 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:53:34.364777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:34.364787 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:53:34.364797 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:53:34.364806 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:34.364816 systemd[1]: Starting modprobe@drm.service... Jul 2 07:53:34.364827 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:34.364837 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:53:34.364848 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:34.364859 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:53:34.364869 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 07:53:34.364879 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 2 07:53:34.364890 kernel: loop: module loaded Jul 2 07:53:34.364900 systemd[1]: Starting systemd-journald.service... Jul 2 07:53:34.364910 kernel: fuse: init (API version 7.34) Jul 2 07:53:34.364920 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:53:34.364930 systemd[1]: Starting systemd-network-generator.service... 
Jul 2 07:53:34.364941 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:53:34.364951 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:53:34.364962 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:34.364971 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:53:34.364982 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:53:34.364992 systemd[1]: Mounted media.mount. Jul 2 07:53:34.365002 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:53:34.365012 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:53:34.365022 systemd[1]: Mounted tmp.mount. Jul 2 07:53:34.365033 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:53:34.365044 kernel: audit: type=1130 audit(1719906814.357:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.365054 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:53:34.365064 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:53:34.365074 kernel: audit: type=1305 audit(1719906814.362:90): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:53:34.365087 systemd-journald[1008]: Journal started Jul 2 07:53:34.365123 systemd-journald[1008]: Runtime Journal (/run/log/journal/305d42d39b594c7a9fdf35cc15dc1d80) is 6.0M, max 48.4M, 42.4M free. Jul 2 07:53:34.264000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:53:34.264000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 2 07:53:34.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.362000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:53:34.362000 audit[1008]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe7b74f9c0 a2=4000 a3=7ffe7b74fa5c items=0 ppid=1 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:34.365613 kernel: audit: type=1300 audit(1719906814.362:90): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe7b74f9c0 a2=4000 a3=7ffe7b74fa5c items=0 ppid=1 pid=1008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:34.365630 kernel: audit: type=1327 audit(1719906814.362:90): proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:53:34.362000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:53:34.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:34.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.376234 kernel: audit: type=1130 audit(1719906814.371:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.376263 systemd[1]: Started systemd-journald.service. Jul 2 07:53:34.376283 kernel: audit: type=1131 audit(1719906814.371:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.381057 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:53:34.384092 kernel: audit: type=1130 audit(1719906814.379:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.384292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:34.384577 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:34.387965 kernel: audit: type=1130 audit(1719906814.383:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.388173 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:53:34.388306 systemd[1]: Finished modprobe@drm.service. Jul 2 07:53:34.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.389274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:34.389412 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:34.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 07:53:34.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.390467 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:53:34.390604 systemd[1]: Finished modprobe@fuse.service. Jul 2 07:53:34.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.391636 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:34.391782 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:34.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.392843 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:53:34.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.394089 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:53:34.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.395335 systemd[1]: Finished systemd-remount-fs.service. Jul 2 07:53:34.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.396462 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:53:34.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.397715 systemd[1]: Reached target network-pre.target. Jul 2 07:53:34.399550 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:53:34.401162 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:53:34.401930 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:53:34.403214 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:53:34.404917 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:53:34.405773 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 07:53:34.412995 systemd-journald[1008]: Time spent on flushing to /var/log/journal/305d42d39b594c7a9fdf35cc15dc1d80 is 14.464ms for 1102 entries. Jul 2 07:53:34.412995 systemd-journald[1008]: System Journal (/var/log/journal/305d42d39b594c7a9fdf35cc15dc1d80) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:53:34.448434 systemd-journald[1008]: Received client request to flush runtime journal. Jul 2 07:53:34.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.406702 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:53:34.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.407515 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:34.408473 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:53:34.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.410916 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:53:34.413206 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:53:34.451905 udevadm[1052]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:53:34.417020 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:53:34.418071 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:53:34.419177 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:53:34.422096 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:53:34.425485 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:53:34.431037 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:53:34.432956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 2 07:53:34.449367 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:53:34.450572 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 2 07:53:34.924720 systemd[1]: Finished systemd-hwdb-update.service. Jul 2 07:53:34.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.926974 systemd[1]: Starting systemd-udevd.service... Jul 2 07:53:34.943470 systemd-udevd[1063]: Using default interface naming scheme 'v252'. Jul 2 07:53:34.955091 systemd[1]: Started systemd-udevd.service. 
Jul 2 07:53:34.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:34.958193 systemd[1]: Starting systemd-networkd.service... Jul 2 07:53:34.969179 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:53:35.002943 systemd[1]: Found device dev-ttyS0.device. Jul 2 07:53:35.011960 systemd[1]: Started systemd-userdbd.service. Jul 2 07:53:35.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.019616 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:53:35.027112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:53:35.027648 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:53:35.040000 audit[1078]: AVC avc: denied { confidentiality } for pid=1078 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:53:35.040000 audit[1078]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e98e1db700 a1=3207c a2=7feeeb600bc5 a3=5 items=108 ppid=1063 pid=1078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:35.040000 audit: CWD cwd="/" Jul 2 07:53:35.040000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=1 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=2 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=3 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=4 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=5 name=(null) inode=13826 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=6 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=7 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=8 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=9 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=10 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=11 name=(null) inode=13829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=12 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=13 name=(null) inode=13830 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=14 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=15 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=16 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=17 name=(null) inode=13832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=18 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=19 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=20 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=21 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=22 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=23 name=(null) inode=13835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=24 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=25 name=(null) inode=13836 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=26 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=27 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=28 name=(null) inode=13833 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=29 name=(null) inode=13838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=30 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=31 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=32 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=33 name=(null) inode=13840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=34 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=35 name=(null) inode=13841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=36 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=37 name=(null) inode=13842 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=38 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=39 name=(null) inode=13843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=40 name=(null) inode=13839 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=41 name=(null) inode=13844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=42 name=(null) inode=13824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=43 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=44 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=45 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=46 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=47 name=(null) inode=13847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=48 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=49 name=(null) inode=13848 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=50 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=51 name=(null) inode=13849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=52 name=(null) inode=13845 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=53 name=(null) inode=13850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=55 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=56 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=57 name=(null) inode=13852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=58 
name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=59 name=(null) inode=13853 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=60 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=61 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=62 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=63 name=(null) inode=13855 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=64 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=65 name=(null) inode=13856 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=66 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=67 name=(null) inode=13857 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=68 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=69 name=(null) inode=13858 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=70 name=(null) inode=13854 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=71 name=(null) inode=13859 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=72 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=73 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=74 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=75 name=(null) inode=13861 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=76 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=77 name=(null) inode=13862 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=78 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=79 name=(null) inode=13863 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=80 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=81 name=(null) inode=13864 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=82 name=(null) inode=13860 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=83 name=(null) inode=13865 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=84 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=85 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=86 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=87 name=(null) inode=13867 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=88 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=89 name=(null) inode=13868 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=90 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=91 name=(null) inode=13869 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=92 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=93 name=(null) inode=13870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=94 name=(null) inode=13866 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=95 name=(null) inode=13871 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=96 name=(null) inode=13851 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=97 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=98 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=99 name=(null) inode=13873 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=100 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=101 name=(null) inode=13874 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=102 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=103 name=(null) inode=13875 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=104 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=105 name=(null) inode=13876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=106 name=(null) inode=13872 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PATH item=107 name=(null) inode=13877 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:53:35.040000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:53:35.066622 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 07:53:35.073616 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:53:35.075611 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:53:35.095204 systemd-networkd[1074]: lo: Link UP Jul 2 07:53:35.095217 systemd-networkd[1074]: lo: Gained carrier Jul 2 07:53:35.095671 systemd-networkd[1074]: Enumeration completed Jul 2 07:53:35.095793 systemd-networkd[1074]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:53:35.095804 systemd[1]: Started systemd-networkd.service. Jul 2 07:53:35.097041 systemd-networkd[1074]: eth0: Link UP Jul 2 07:53:35.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.097051 systemd-networkd[1074]: eth0: Gained carrier Jul 2 07:53:35.119895 systemd-networkd[1074]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:53:35.153763 kernel: kvm: Nested Virtualization enabled Jul 2 07:53:35.153806 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:53:35.153820 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:53:35.154981 kernel: SVM: Virtual GIF supported Jul 2 07:53:35.170611 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:53:35.188990 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:53:35.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.191044 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:53:35.197665 lvm[1100]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:53:35.228282 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:53:35.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.229315 systemd[1]: Reached target cryptsetup.target. Jul 2 07:53:35.231220 systemd[1]: Starting lvm2-activation.service... Jul 2 07:53:35.233920 lvm[1102]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:53:35.265130 systemd[1]: Finished lvm2-activation.service. Jul 2 07:53:35.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.272177 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:53:35.273059 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:53:35.273081 systemd[1]: Reached target local-fs.target. Jul 2 07:53:35.273898 systemd[1]: Reached target machines.target. Jul 2 07:53:35.275578 systemd[1]: Starting ldconfig.service... 
Jul 2 07:53:35.276588 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.276707 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.277566 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:53:35.279362 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:53:35.281522 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:53:35.283380 systemd[1]: Starting systemd-sysext.service... Jul 2 07:53:35.284398 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1105 (bootctl) Jul 2 07:53:35.285284 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:53:35.287733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:53:35.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.295933 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:53:35.299171 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:53:35.299387 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:53:35.310632 kernel: loop0: detected capacity change from 0 to 209816 Jul 2 07:53:35.316125 systemd-fsck[1113]: fsck.fat 4.2 (2021-01-31) Jul 2 07:53:35.316125 systemd-fsck[1113]: /dev/vda1: 790 files, 119261/258078 clusters Jul 2 07:53:35.317284 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 2 07:53:35.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.319906 systemd[1]: Mounting boot.mount... Jul 2 07:53:35.338189 systemd[1]: Mounted boot.mount. Jul 2 07:53:35.490646 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:53:35.491399 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:53:35.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.495023 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:53:35.496438 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:53:35.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.506611 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 07:53:35.510760 (sd-sysext)[1128]: Using extensions 'kubernetes'. Jul 2 07:53:35.511137 (sd-sysext)[1128]: Merged extensions into '/usr'. Jul 2 07:53:35.527681 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.529094 systemd[1]: Mounting usr-share-oem.mount... Jul 2 07:53:35.530011 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.531064 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 07:53:35.533181 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:35.535583 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:35.536570 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.536704 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.536801 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.539305 systemd[1]: Mounted usr-share-oem.mount. Jul 2 07:53:35.540499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:35.540665 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:35.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.541973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:35.542089 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:35.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.543720 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:35.543918 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:35.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.545225 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:35.545310 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.546474 systemd[1]: Finished systemd-sysext.service. Jul 2 07:53:35.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.548706 systemd[1]: Starting ensure-sysext.service... Jul 2 07:53:35.550881 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 07:53:35.552773 ldconfig[1104]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:53:35.555437 systemd[1]: Reloading. 
Jul 2 07:53:35.561621 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:53:35.562244 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:53:35.563543 systemd-tmpfiles[1143]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:53:35.606346 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2024-07-02T07:53:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:53:35.606374 /usr/lib/systemd/system-generators/torcx-generator[1164]: time="2024-07-02T07:53:35Z" level=info msg="torcx already run" Jul 2 07:53:35.676265 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:53:35.676281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:53:35.694699 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:53:35.742328 systemd[1]: Finished ldconfig.service. Jul 2 07:53:35.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.744185 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 2 07:53:35.746999 systemd[1]: Starting audit-rules.service... Jul 2 07:53:35.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.748880 systemd[1]: Starting clean-ca-certificates.service... Jul 2 07:53:35.750741 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 07:53:35.753118 systemd[1]: Starting systemd-resolved.service... Jul 2 07:53:35.755081 systemd[1]: Starting systemd-timesyncd.service... Jul 2 07:53:35.756862 systemd[1]: Starting systemd-update-utmp.service... Jul 2 07:53:35.760974 systemd[1]: Finished clean-ca-certificates.service. Jul 2 07:53:35.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.763910 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:53:35.766104 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.766484 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.767793 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 07:53:35.767000 audit[1223]: SYSTEM_BOOT pid=1223 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.769713 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:35.771787 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:35.772611 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.772777 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.772883 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:53:35.772951 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.774044 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 07:53:35.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.776103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:35.776246 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:35.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.777829 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:35.777962 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:35.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.779157 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:35.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:35.779370 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:35.783306 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.783744 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 2 07:53:35.783000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:53:35.783000 audit[1241]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2993cdc0 a2=420 a3=0 items=0 ppid=1212 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:35.783000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:53:35.784737 augenrules[1241]: No rules Jul 2 07:53:35.785036 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:35.786849 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:35.788662 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:35.789430 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.789565 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.790921 systemd[1]: Starting systemd-update-done.service... Jul 2 07:53:35.791822 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:53:35.791950 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.793627 systemd[1]: Finished audit-rules.service. Jul 2 07:53:35.794950 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:35.795076 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:35.796412 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:35.796535 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:35.797898 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:35.798067 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:35.799328 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:35.799451 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.800740 systemd[1]: Finished systemd-update-utmp.service. Jul 2 07:53:35.804382 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.804582 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.805520 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:53:35.807188 systemd[1]: Starting modprobe@drm.service... Jul 2 07:53:35.810008 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:53:35.811757 systemd[1]: Starting modprobe@loop.service... Jul 2 07:53:35.812568 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.812673 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.819797 systemd[1]: Starting systemd-networkd-wait-online.service... 
Jul 2 07:53:35.820821 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:53:35.820973 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:53:35.822640 systemd[1]: Finished systemd-update-done.service. Jul 2 07:53:35.823915 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:53:35.824080 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:53:35.825354 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:53:35.825515 systemd[1]: Finished modprobe@drm.service. Jul 2 07:53:35.826872 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:53:35.827005 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:53:35.828434 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:53:35.828577 systemd[1]: Finished modprobe@loop.service. Jul 2 07:53:35.830080 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:53:35.830169 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.831371 systemd[1]: Finished ensure-sysext.service. Jul 2 07:53:35.833893 systemd[1]: Started systemd-timesyncd.service. Jul 2 07:53:35.835023 systemd-timesyncd[1222]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:53:35.835053 systemd[1]: Reached target time-set.target. Jul 2 07:53:35.835070 systemd-timesyncd[1222]: Initial clock synchronization to Tue 2024-07-02 07:53:36.234401 UTC. Jul 2 07:53:35.841298 systemd-resolved[1218]: Positive Trust Anchors: Jul 2 07:53:35.841310 systemd-resolved[1218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:53:35.841345 systemd-resolved[1218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:53:35.847681 systemd-resolved[1218]: Defaulting to hostname 'linux'. Jul 2 07:53:35.848968 systemd[1]: Started systemd-resolved.service. Jul 2 07:53:35.849850 systemd[1]: Reached target network.target. Jul 2 07:53:35.850641 systemd[1]: Reached target nss-lookup.target. Jul 2 07:53:35.851445 systemd[1]: Reached target sysinit.target. Jul 2 07:53:35.852275 systemd[1]: Started motdgen.path. Jul 2 07:53:35.852988 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 07:53:35.854180 systemd[1]: Started logrotate.timer. Jul 2 07:53:35.854988 systemd[1]: Started mdadm.timer. Jul 2 07:53:35.855664 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 07:53:35.856497 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:53:35.856519 systemd[1]: Reached target paths.target. Jul 2 07:53:35.857347 systemd[1]: Reached target timers.target. Jul 2 07:53:35.858348 systemd[1]: Listening on dbus.socket. Jul 2 07:53:35.860137 systemd[1]: Starting docker.socket... Jul 2 07:53:35.861688 systemd[1]: Listening on sshd.socket. 
Jul 2 07:53:35.862502 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.862750 systemd[1]: Listening on docker.socket. Jul 2 07:53:35.863502 systemd[1]: Reached target sockets.target. Jul 2 07:53:35.864278 systemd[1]: Reached target basic.target. Jul 2 07:53:35.865122 systemd[1]: System is tainted: cgroupsv1 Jul 2 07:53:35.865158 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.865175 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 07:53:35.866016 systemd[1]: Starting containerd.service... Jul 2 07:53:35.867944 systemd[1]: Starting dbus.service... Jul 2 07:53:35.869521 systemd[1]: Starting enable-oem-cloudinit.service... Jul 2 07:53:35.871345 systemd[1]: Starting extend-filesystems.service... Jul 2 07:53:35.872277 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 07:53:35.873334 systemd[1]: Starting motdgen.service... Jul 2 07:53:35.873915 jq[1276]: false Jul 2 07:53:35.875244 systemd[1]: Starting prepare-helm.service... Jul 2 07:53:35.877113 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 07:53:35.879480 systemd[1]: Starting sshd-keygen.service... Jul 2 07:53:35.882223 systemd[1]: Starting systemd-logind.service... Jul 2 07:53:35.883060 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:53:35.883113 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:53:35.884037 systemd[1]: Starting update-engine.service... Jul 2 07:53:35.885606 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 07:53:35.888435 jq[1292]: true Jul 2 07:53:35.888951 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:53:35.889153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 07:53:35.889929 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:53:35.894035 extend-filesystems[1277]: Found loop1 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found sr0 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda1 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda2 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda3 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found usr Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda4 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda6 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda7 Jul 2 07:53:35.894035 extend-filesystems[1277]: Found vda9 Jul 2 07:53:35.894035 extend-filesystems[1277]: Checking size of /dev/vda9 Jul 2 07:53:35.890118 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 2 07:53:35.931060 tar[1298]: linux-amd64/helm Jul 2 07:53:35.905264 dbus-daemon[1275]: [system] SELinux support is enabled Jul 2 07:53:35.931402 extend-filesystems[1277]: Resized partition /dev/vda9 Jul 2 07:53:35.933873 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:53:35.905425 systemd[1]: Started dbus.service. 
Jul 2 07:53:35.935041 jq[1301]: true Jul 2 07:53:35.935232 extend-filesystems[1329]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 07:53:35.908268 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:53:35.908473 systemd[1]: Finished motdgen.service. Jul 2 07:53:35.909409 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:53:35.909423 systemd[1]: Reached target system-config.target. Jul 2 07:53:35.918854 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:53:35.918867 systemd[1]: Reached target user-config.target. Jul 2 07:53:35.947978 update_engine[1290]: I0702 07:53:35.947840 1290 main.cc:92] Flatcar Update Engine starting Jul 2 07:53:35.956102 env[1302]: time="2024-07-02T07:53:35.956053319Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 07:53:35.962688 systemd[1]: Started update-engine.service. Jul 2 07:53:35.963277 update_engine[1290]: I0702 07:53:35.963248 1290 update_check_scheduler.cc:74] Next update check in 10m54s Jul 2 07:53:35.965240 systemd[1]: Started locksmithd.service. Jul 2 07:53:35.990631 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:53:35.990680 env[1302]: time="2024-07-02T07:53:35.977749587Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:53:35.990680 env[1302]: time="2024-07-02T07:53:35.990578866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:35.990769 bash[1333]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:53:35.968780 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 07:53:35.990819 systemd-logind[1289]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:53:35.990834 systemd-logind[1289]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:53:35.992584 env[1302]: time="2024-07-02T07:53:35.992449525Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:35.992584 env[1302]: time="2024-07-02T07:53:35.992474902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:35.992741 env[1302]: time="2024-07-02T07:53:35.992715303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:35.992741 env[1302]: time="2024-07-02T07:53:35.992736443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:35.992880 env[1302]: time="2024-07-02T07:53:35.992748646Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:53:35.992880 env[1302]: time="2024-07-02T07:53:35.992757703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:53:35.992880 env[1302]: time="2024-07-02T07:53:35.992816062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:35.993013 env[1302]: time="2024-07-02T07:53:35.992992563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:53:35.993161 env[1302]: time="2024-07-02T07:53:35.993138567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:53:35.993161 env[1302]: time="2024-07-02T07:53:35.993157342Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:53:35.993338 env[1302]: time="2024-07-02T07:53:35.993199902Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:53:35.993338 env[1302]: time="2024-07-02T07:53:35.993210882Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:53:35.994624 extend-filesystems[1329]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:53:35.994624 extend-filesystems[1329]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:53:35.994624 extend-filesystems[1329]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:53:36.004787 extend-filesystems[1277]: Resized filesystem in /dev/vda9 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997157914Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997180607Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997192549Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997219791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997250067Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997263523Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997276687Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997288770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997300963Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997322814Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997334065Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997345336Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997419986Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:53:36.008309 env[1302]: time="2024-07-02T07:53:35.997480860Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:53:35.995509 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997790220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997813875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997824765Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997868728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997880680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997891360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997900798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997911618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997923039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997933058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997942596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.997954749Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.998068753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.998081847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:53:36.008732 env[1302]: time="2024-07-02T07:53:35.998093269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:53:35.995744 systemd[1]: Finished extend-filesystems.service. Jul 2 07:53:36.009065 env[1302]: time="2024-07-02T07:53:35.998102746Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 2 07:53:36.009065 env[1302]: time="2024-07-02T07:53:35.998114699Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:53:36.009065 env[1302]: time="2024-07-02T07:53:35.998124066Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:53:36.009065 env[1302]: time="2024-07-02T07:53:35.998140457Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:53:36.009065 env[1302]: time="2024-07-02T07:53:35.998170974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:53:35.999047 systemd-logind[1289]: New seat seat0. Jul 2 07:53:36.001508 systemd[1]: Started containerd.service. Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.998341153Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.998385477Z" level=info msg="Connect containerd service" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.998413319Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.998884933Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network 
config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.999083496Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000124568Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000160235Z" level=info msg="containerd successfully booted in 0.056623s" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:35.999737973Z" level=info msg="Start subscribing containerd event" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000857613Z" level=info msg="Start recovering state" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000917519Z" level=info msg="Start event monitor" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000942565Z" level=info msg="Start snapshots syncer" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000957365Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:53:36.009233 env[1302]: time="2024-07-02T07:53:36.000965907Z" level=info msg="Start streaming server" Jul 2 07:53:36.007052 locksmithd[1336]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:53:36.013066 systemd[1]: Started systemd-logind.service. Jul 2 07:53:36.318637 tar[1298]: linux-amd64/LICENSE Jul 2 07:53:36.318751 tar[1298]: linux-amd64/README.md Jul 2 07:53:36.322968 systemd[1]: Finished prepare-helm.service. Jul 2 07:53:36.712287 systemd-networkd[1074]: eth0: Gained IPv6LL Jul 2 07:53:36.713916 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 07:53:36.715353 systemd[1]: Reached target network-online.target. Jul 2 07:53:36.717698 systemd[1]: Starting kubelet.service... Jul 2 07:53:37.194836 sshd_keygen[1307]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:53:37.212958 systemd[1]: Finished sshd-keygen.service. Jul 2 07:53:37.215305 systemd[1]: Starting issuegen.service... Jul 2 07:53:37.219349 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:53:37.219510 systemd[1]: Finished issuegen.service. Jul 2 07:53:37.221407 systemd[1]: Starting systemd-user-sessions.service... Jul 2 07:53:37.226604 systemd[1]: Finished systemd-user-sessions.service. Jul 2 07:53:37.228611 systemd[1]: Started getty@tty1.service. Jul 2 07:53:37.230383 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 07:53:37.231461 systemd[1]: Reached target getty.target. Jul 2 07:53:37.250598 systemd[1]: Started kubelet.service. Jul 2 07:53:37.251790 systemd[1]: Reached target multi-user.target. Jul 2 07:53:37.253864 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 07:53:37.259189 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:53:37.259372 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 07:53:37.261287 systemd[1]: Startup finished in 4.734s (kernel) + 5.415s (userspace) = 10.149s. 
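The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: the CRI plugin is configured with NetworkPluginConfDir=/etc/cni/net.d, no CNI provider has installed a network config yet, and the "Start cni network conf syncer for default" line is the watcher that will pick one up later. Purely as an illustration of what that directory eventually contains, here is a sketch that writes a minimal bridge network config; the network name, bridge name, subnet, and filename are placeholder assumptions, not values from this host.

```python
# Sketch only: drop a minimal CNI bridge config into the directory the CRI
# plugin reported as empty. All names and the subnet are placeholders.
import json
import pathlib

conf = {
    "cniVersion": "0.3.1",
    "name": "examplenet",            # hypothetical network name
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",    # placeholder pod subnet
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

path = pathlib.Path("/etc/cni/net.d/10-examplenet.conf")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2) + "\n")
print("wrote", path)
```

In a real cluster the CNI provider (flannel, Calico, etc.) installs its own config here, and containerd's conf syncer reloads it without a restart.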
Jul 2 07:53:37.720402 kubelet[1376]: E0702 07:53:37.720331 1376 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:53:37.722427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:53:37.722582 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:53:46.119290 systemd[1]: Created slice system-sshd.slice. Jul 2 07:53:46.120295 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:58118.service. Jul 2 07:53:46.164383 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 58118 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.165737 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.174456 systemd[1]: Created slice user-500.slice. Jul 2 07:53:46.175622 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 07:53:46.177521 systemd-logind[1289]: New session 1 of user core. Jul 2 07:53:46.184231 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 07:53:46.185372 systemd[1]: Starting user@500.service... Jul 2 07:53:46.188275 (systemd)[1392]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.253927 systemd[1392]: Queued start job for default target default.target. Jul 2 07:53:46.254133 systemd[1392]: Reached target paths.target. Jul 2 07:53:46.254148 systemd[1392]: Reached target sockets.target. Jul 2 07:53:46.254170 systemd[1392]: Reached target timers.target. Jul 2 07:53:46.254181 systemd[1392]: Reached target basic.target. Jul 2 07:53:46.254220 systemd[1392]: Reached target default.target. Jul 2 07:53:46.254241 systemd[1392]: Startup finished in 61ms. Jul 2 07:53:46.254455 systemd[1]: Started user@500.service. Jul 2 07:53:46.255537 systemd[1]: Started session-1.scope. Jul 2 07:53:46.306686 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:58130.service. Jul 2 07:53:46.355575 sshd[1401]: Accepted publickey for core from 10.0.0.1 port 58130 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.357074 sshd[1401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.361002 systemd-logind[1289]: New session 2 of user core. Jul 2 07:53:46.361845 systemd[1]: Started session-2.scope. Jul 2 07:53:46.415480 sshd[1401]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.417828 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:58136.service. Jul 2 07:53:46.418213 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:58130.service: Deactivated successfully. Jul 2 07:53:46.419032 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:53:46.419083 systemd-logind[1289]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:53:46.420171 systemd-logind[1289]: Removed session 2. Jul 2 07:53:46.458660 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 58136 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.459789 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.463244 systemd-logind[1289]: New session 3 of user core. Jul 2 07:53:46.463979 systemd[1]: Started session-3.scope. 
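The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during kubeadm init/join, so these early failures and the restart loop that follows are expected until bootstrap runs. For orientation only, a minimal KubeletConfiguration has the shape below; the cgroup driver and DNS values are assumptions, not taken from this boot.

```python
# Sketch: materialize a minimal KubeletConfiguration at the path the kubelet
# complained about. The field values are illustrative assumptions.
import pathlib

KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # assumption; must match the container runtime
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10                 # placeholder cluster DNS service IP
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)
print("wrote", path)
```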
Jul 2 07:53:46.514082 sshd[1407]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.516269 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:58138.service. Jul 2 07:53:46.516853 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:58136.service: Deactivated successfully. Jul 2 07:53:46.517664 systemd-logind[1289]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:53:46.517724 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:53:46.518476 systemd-logind[1289]: Removed session 3. Jul 2 07:53:46.559253 sshd[1413]: Accepted publickey for core from 10.0.0.1 port 58138 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.560576 sshd[1413]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.564200 systemd-logind[1289]: New session 4 of user core. Jul 2 07:53:46.564892 systemd[1]: Started session-4.scope. Jul 2 07:53:46.621974 sshd[1413]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.624568 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:58152.service. Jul 2 07:53:46.625085 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:58138.service: Deactivated successfully. Jul 2 07:53:46.625862 systemd-logind[1289]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:53:46.625888 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:53:46.626854 systemd-logind[1289]: Removed session 4. Jul 2 07:53:46.664816 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.665879 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.669056 systemd-logind[1289]: New session 5 of user core. Jul 2 07:53:46.669737 systemd[1]: Started session-5.scope. Jul 2 07:53:46.724267 sudo[1426]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 07:53:46.724465 sudo[1426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:53:46.733840 dbus-daemon[1275]: \xd0Md\u0006\x91U: received setenforce notice (enforcing=1875971488) Jul 2 07:53:46.735879 sudo[1426]: pam_unix(sudo:session): session closed for user root Jul 2 07:53:46.737418 sshd[1421]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.739656 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:58166.service. Jul 2 07:53:46.740379 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:58152.service: Deactivated successfully. Jul 2 07:53:46.741105 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:53:46.741134 systemd-logind[1289]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:53:46.741840 systemd-logind[1289]: Removed session 5. Jul 2 07:53:46.779944 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 58166 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.780990 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.783754 systemd-logind[1289]: New session 6 of user core. Jul 2 07:53:46.784329 systemd[1]: Started session-6.scope. 
Jul 2 07:53:46.836487 sudo[1435]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 07:53:46.836686 sudo[1435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:53:46.838863 sudo[1435]: pam_unix(sudo:session): session closed for user root Jul 2 07:53:46.842943 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 07:53:46.843122 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:53:46.850406 systemd[1]: Stopping audit-rules.service... Jul 2 07:53:46.849000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 07:53:46.851502 auditctl[1438]: No rules Jul 2 07:53:46.851810 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 07:53:46.851977 systemd[1]: Stopped audit-rules.service. Jul 2 07:53:46.852388 kernel: kauditd_printk_skb: 163 callbacks suppressed Jul 2 07:53:46.852428 kernel: audit: type=1305 audit(1719906826.849:145): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 07:53:46.853110 systemd[1]: Starting audit-rules.service... Jul 2 07:53:46.849000 audit[1438]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc19859710 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:46.858734 kernel: audit: type=1300 audit(1719906826.849:145): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc19859710 a2=420 a3=0 items=0 ppid=1 pid=1438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:46.849000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 07:53:46.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.863171 kernel: audit: type=1327 audit(1719906826.849:145): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 07:53:46.863205 kernel: audit: type=1131 audit(1719906826.850:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.868902 augenrules[1456]: No rules Jul 2 07:53:46.869578 systemd[1]: Finished audit-rules.service. Jul 2 07:53:46.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.870520 sudo[1434]: pam_unix(sudo:session): session closed for user root Jul 2 07:53:46.871737 sshd[1428]: pam_unix(sshd:session): session closed for user core Jul 2 07:53:46.869000 audit[1434]: USER_END pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:46.874283 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:58170.service. Jul 2 07:53:46.874978 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:58166.service: Deactivated successfully. Jul 2 07:53:46.875577 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:53:46.876326 systemd-logind[1289]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:53:46.876626 kernel: audit: type=1130 audit(1719906826.869:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.876682 kernel: audit: type=1106 audit(1719906826.869:148): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.876704 kernel: audit: type=1104 audit(1719906826.869:149): pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.869000 audit[1434]: CRED_DISP pid=1434 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.877197 systemd-logind[1289]: Removed session 6. Jul 2 07:53:46.872000 audit[1428]: USER_END pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.884230 kernel: audit: type=1106 audit(1719906826.872:150): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.884263 kernel: audit: type=1104 audit(1719906826.872:151): pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.872000 audit[1428]: CRED_DISP pid=1428 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:58170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.890994 kernel: audit: type=1130 audit(1719906826.872:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:58170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.138:22-10.0.0.1:58166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:53:46.918000 audit[1461]: USER_ACCT pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.920145 sshd[1461]: Accepted publickey for core from 10.0.0.1 port 58170 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:53:46.919000 audit[1461]: CRED_ACQ pid=1461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.919000 audit[1461]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef9c6120 a2=3 a3=0 items=0 ppid=1 pid=1461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:46.919000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:53:46.921419 sshd[1461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:53:46.924333 systemd-logind[1289]: New session 7 of user core. Jul 2 07:53:46.924999 systemd[1]: Started session-7.scope. Jul 2 07:53:46.926000 audit[1461]: USER_START pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.927000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:53:46.974000 audit[1467]: USER_ACCT pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.975632 sudo[1467]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:53:46.975000 audit[1467]: CRED_REFR pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.975807 sudo[1467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:53:46.976000 audit[1467]: USER_START pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:53:46.993587 systemd[1]: Starting docker.service... 
Jul 2 07:53:47.026428 env[1479]: time="2024-07-02T07:53:47.026378034Z" level=info msg="Starting up" Jul 2 07:53:47.027713 env[1479]: time="2024-07-02T07:53:47.027690055Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:53:47.027713 env[1479]: time="2024-07-02T07:53:47.027706330Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:53:47.027776 env[1479]: time="2024-07-02T07:53:47.027724500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:53:47.027776 env[1479]: time="2024-07-02T07:53:47.027733276Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:53:47.029126 env[1479]: time="2024-07-02T07:53:47.029109353Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 07:53:47.029126 env[1479]: time="2024-07-02T07:53:47.029124301Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 07:53:47.029213 env[1479]: time="2024-07-02T07:53:47.029144771Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 07:53:47.029213 env[1479]: time="2024-07-02T07:53:47.029154338Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 07:53:47.687213 env[1479]: time="2024-07-02T07:53:47.687170150Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 2 07:53:47.687213 env[1479]: time="2024-07-02T07:53:47.687201971Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 2 07:53:47.687484 env[1479]: time="2024-07-02T07:53:47.687442966Z" level=info msg="Loading containers: start." 
Jul 2 07:53:47.734000 audit[1514]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.734000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff30003850 a2=0 a3=7fff3000383c items=0 ppid=1479 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.734000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 07:53:47.735000 audit[1516]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.735000 audit[1516]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc0e0244b0 a2=0 a3=7ffc0e02449c items=0 ppid=1479 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.735000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 07:53:47.736000 audit[1518]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.736000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe666dec10 a2=0 a3=7ffe666debfc items=0 ppid=1479 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.736000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:53:47.738000 audit[1520]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.738000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fffac0f6d00 a2=0 a3=7fffac0f6cec items=0 ppid=1479 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.738000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:53:47.739000 audit[1522]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.739000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe7661d750 a2=0 a3=7ffe7661d73c items=0 ppid=1479 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.739000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 07:53:47.755000 audit[1527]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
07:53:47.755000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd25983230 a2=0 a3=7ffd2598321c items=0 ppid=1479 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.755000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 07:53:47.797000 audit[1529]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.797000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffce4ceb820 a2=0 a3=7ffce4ceb80c items=0 ppid=1479 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.797000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 07:53:47.799000 audit[1531]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.799000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe69608ba0 a2=0 a3=7ffe69608b8c items=0 ppid=1479 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.799000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 07:53:47.800000 audit[1533]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.800000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe1e10d050 a2=0 a3=7ffe1e10d03c items=0 ppid=1479 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.800000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:53:47.809000 audit[1537]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.809000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe53e79ba0 a2=0 a3=7ffe53e79b8c items=0 ppid=1479 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.809000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:53:47.809000 audit[1538]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.809000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe8767b030 a2=0 a3=7ffe8767b01c items=0 ppid=1479 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.809000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:53:47.819638 kernel: Initializing XFRM netlink socket Jul 2 07:53:47.846870 env[1479]: time="2024-07-02T07:53:47.846833429Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 07:53:47.854310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:53:47.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.854470 systemd[1]: Stopped kubelet.service. Jul 2 07:53:47.855909 systemd[1]: Starting kubelet.service... Jul 2 07:53:47.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:47.930115 systemd[1]: Started kubelet.service. Jul 2 07:53:47.862000 audit[1549]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.862000 audit[1549]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffff9264110 a2=0 a3=7ffff92640fc items=0 ppid=1479 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.862000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 07:53:47.937000 audit[1563]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.937000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd197e31e0 a2=0 a3=7ffd197e31cc items=0 ppid=1479 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.937000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 07:53:47.940000 audit[1566]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.940000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffda62537f0 a2=0 a3=7ffda62537dc items=0 ppid=1479 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.940000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 07:53:47.944000 audit[1569]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.944000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff01521ec0 a2=0 a3=7fff01521eac items=0 ppid=1479 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.944000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 07:53:47.946000 audit[1571]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.946000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc5e7ddce0 a2=0 a3=7ffc5e7ddccc items=0 ppid=1479 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.946000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 07:53:47.948000 audit[1573]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.948000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff5630d190 a2=0 a3=7fff5630d17c items=0 ppid=1479 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.948000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 07:53:47.950000 audit[1576]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.950000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffefff8b6c0 a2=0 a3=7ffefff8b6ac items=0 ppid=1479 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.950000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 07:53:47.957000 audit[1579]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.957000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe1ea93630 a2=0 a3=7ffe1ea9361c items=0 ppid=1479 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 2 07:53:47.957000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 07:53:47.959000 audit[1581]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.959000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff34214c90 a2=0 a3=7fff34214c7c items=0 ppid=1479 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.959000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:53:47.962000 audit[1583]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.962000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe9b07d8b0 a2=0 a3=7ffe9b07d89c items=0 ppid=1479 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.962000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:53:47.964000 audit[1585]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:47.964000 audit[1585]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc40cb7ec0 a2=0 a3=7ffc40cb7eac items=0 ppid=1479 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:47.964000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 07:53:47.966327 systemd-networkd[1074]: docker0: Link UP Jul 2 07:53:48.239379 kubelet[1556]: E0702 07:53:48.239249 1556 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:53:48.242726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:53:48.242888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:53:48.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 2 07:53:48.244000 audit[1590]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:48.244000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcebe62c20 a2=0 a3=7ffcebe62c0c items=0 ppid=1479 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:48.244000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:53:48.245000 audit[1591]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:53:48.245000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdd8c473f0 a2=0 a3=7ffdd8c473dc items=0 ppid=1479 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:53:48.245000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:53:48.247698 env[1479]: time="2024-07-02T07:53:48.247660399Z" level=info msg="Loading containers: done." Jul 2 07:53:48.264986 env[1479]: time="2024-07-02T07:53:48.264935635Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:53:48.265140 env[1479]: time="2024-07-02T07:53:48.265098417Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 07:53:48.265199 env[1479]: time="2024-07-02T07:53:48.265172584Z" level=info msg="Daemon has completed initialization" Jul 2 07:53:48.282902 systemd[1]: Started docker.service. Jul 2 07:53:48.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:48.289262 env[1479]: time="2024-07-02T07:53:48.289188902Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:53:48.922505 env[1302]: time="2024-07-02T07:53:48.922463248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:53:49.589000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088883871.mount: Deactivated successfully. 
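Each NETFILTER_CFG audit record in the run above carries the full command line of the iptables invocation hex-encoded in its PROCTITLE field, with NUL bytes separating the argv elements. Decoding is mechanical; the first record of the run, for example, decodes to /usr/sbin/iptables --wait -t nat -N DOCKER, i.e. dockerd creating its DOCKER chain in the nat table. A small decoder, assuming nothing beyond the hex string itself:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_argv: str) -> str:
    parts = bytes.fromhex(hex_argv).split(b"\x00")
    return " ".join(p.decode("utf-8", "replace") for p in parts if p)

# First NETFILTER_CFG record of the run above:
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER
```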
Jul 2 07:53:51.876752 env[1302]: time="2024-07-02T07:53:51.876679812Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:51.879013 env[1302]: time="2024-07-02T07:53:51.878965153Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:51.880691 env[1302]: time="2024-07-02T07:53:51.880652750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:51.882698 env[1302]: time="2024-07-02T07:53:51.882656421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:51.883458 env[1302]: time="2024-07-02T07:53:51.883420730Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:53:51.892198 env[1302]: time="2024-07-02T07:53:51.892154657Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:53:54.675612 env[1302]: time="2024-07-02T07:53:54.675538560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:54.723786 env[1302]: time="2024-07-02T07:53:54.723742227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:54.777205 env[1302]: time="2024-07-02T07:53:54.777160063Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:54.805629 env[1302]: time="2024-07-02T07:53:54.805580408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:54.806234 env[1302]: time="2024-07-02T07:53:54.806209549Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jul 2 07:53:54.815268 env[1302]: time="2024-07-02T07:53:54.815231861Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 07:53:56.569587 env[1302]: time="2024-07-02T07:53:56.569524524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:56.571277 env[1302]: time="2024-07-02T07:53:56.571243807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:56.573083 env[1302]: 
time="2024-07-02T07:53:56.573054759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:56.574565 env[1302]: time="2024-07-02T07:53:56.574535575Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:56.575154 env[1302]: time="2024-07-02T07:53:56.575124715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:53:56.582756 env[1302]: time="2024-07-02T07:53:56.582722036Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:53:57.811775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584317312.mount: Deactivated successfully. Jul 2 07:53:58.493787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:53:58.493975 systemd[1]: Stopped kubelet.service. Jul 2 07:53:58.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.494982 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 2 07:53:58.495037 kernel: audit: type=1130 audit(1719906838.493:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.495255 systemd[1]: Starting kubelet.service... Jul 2 07:53:58.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.518893 kernel: audit: type=1131 audit(1719906838.493:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.563589 systemd[1]: Started kubelet.service. Jul 2 07:53:58.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.567619 kernel: audit: type=1130 audit(1719906838.563:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:53:58.711263 kubelet[1663]: E0702 07:53:58.711202 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:53:58.713264 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:53:58.713394 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 07:53:58.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:53:58.717610 kernel: audit: type=1131 audit(1719906838.712:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:53:58.993402 env[1302]: time="2024-07-02T07:53:58.993272208Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:58.995466 env[1302]: time="2024-07-02T07:53:58.995426672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:58.996994 env[1302]: time="2024-07-02T07:53:58.996952131Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:58.998217 env[1302]: time="2024-07-02T07:53:58.998183774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:58.998578 env[1302]: time="2024-07-02T07:53:58.998545837Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:53:59.009138 env[1302]: time="2024-07-02T07:53:59.009093987Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:53:59.515453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount897936677.mount: Deactivated successfully. 
Jul 2 07:53:59.519909 env[1302]: time="2024-07-02T07:53:59.519860701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:59.521637 env[1302]: time="2024-07-02T07:53:59.521606961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:59.523084 env[1302]: time="2024-07-02T07:53:59.523058198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:59.524460 env[1302]: time="2024-07-02T07:53:59.524421950Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:53:59.524847 env[1302]: time="2024-07-02T07:53:59.524818196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:53:59.533009 env[1302]: time="2024-07-02T07:53:59.532975369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:54:00.074168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786635746.mount: Deactivated successfully. Jul 2 07:54:02.431768 env[1302]: time="2024-07-02T07:54:02.431708203Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:02.433928 env[1302]: time="2024-07-02T07:54:02.433889594Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:02.435670 env[1302]: time="2024-07-02T07:54:02.435634032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:02.437428 env[1302]: time="2024-07-02T07:54:02.437395338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:02.438098 env[1302]: time="2024-07-02T07:54:02.438038066Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:54:02.446127 env[1302]: time="2024-07-02T07:54:02.446106658Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 07:54:03.012473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1577483629.mount: Deactivated successfully. 
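The PullImage / ImageCreate pairs above and below show containerd populating its image store with the control-plane images ahead of kubelet bootstrap. The store can be inspected out of band with containerd's own client; a sketch, assuming ctr is on PATH and the CRI images live in the default k8s.io namespace:

```python
# Sketch: list the images containerd has pulled into the CRI (k8s.io) namespace.
import subprocess

out = subprocess.run(
    ["ctr", "-n", "k8s.io", "images", "ls"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    if line.startswith("REF") or "registry.k8s.io" in line:
        print(line)
```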
Jul 2 07:54:03.966438 env[1302]: time="2024-07-02T07:54:03.966375518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:03.968482 env[1302]: time="2024-07-02T07:54:03.968453355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:03.970046 env[1302]: time="2024-07-02T07:54:03.970016772Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:03.971503 env[1302]: time="2024-07-02T07:54:03.971447429Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:03.971842 env[1302]: time="2024-07-02T07:54:03.971814946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jul 2 07:54:06.288533 systemd[1]: Stopped kubelet.service. Jul 2 07:54:06.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.291940 systemd[1]: Starting kubelet.service... Jul 2 07:54:06.294840 kernel: audit: type=1130 audit(1719906846.288:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.294958 kernel: audit: type=1131 audit(1719906846.289:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.305741 systemd[1]: Reloading. Jul 2 07:54:06.365583 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-07-02T07:54:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:54:06.365934 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-07-02T07:54:06Z" level=info msg="torcx already run" Jul 2 07:54:06.590843 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:54:06.590858 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
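The two warnings just above name the replacement directives directly: CPUShares= is superseded by CPUWeight= and MemoryLimit= by MemoryMax=. One way to address them without editing the vendor locksmithd.service is a drop-in that clears the deprecated settings and sets the new ones; the concrete weight and limit below are placeholders, not values read from the unit.

```python
# Sketch: write a drop-in overriding the deprecated cgroup directives flagged
# in the log. Values are placeholders; run `systemctl daemon-reload` after.
import pathlib

dropin = pathlib.Path("/etc/systemd/system/locksmithd.service.d/10-cgroup.conf")
dropin.parent.mkdir(parents=True, exist_ok=True)
dropin.write_text(
    "[Service]\n"
    "CPUShares=\n"       # empty assignment resets the deprecated setting
    "MemoryLimit=\n"
    "CPUWeight=100\n"    # placeholder weight
    "MemoryMax=128M\n"   # placeholder limit
)
print("wrote", dropin)
```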
Jul 2 07:54:06.608960 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:54:06.673147 systemd[1]: Started kubelet.service. Jul 2 07:54:06.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.677645 kernel: audit: type=1130 audit(1719906846.673:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.677799 systemd[1]: Stopping kubelet.service... Jul 2 07:54:06.678604 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:54:06.678830 systemd[1]: Stopped kubelet.service. Jul 2 07:54:06.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.680288 systemd[1]: Starting kubelet.service... Jul 2 07:54:06.682645 kernel: audit: type=1131 audit(1719906846.678:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.752671 systemd[1]: Started kubelet.service. Jul 2 07:54:06.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.756726 kernel: audit: type=1130 audit(1719906846.751:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:06.799323 kubelet[1863]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:54:06.799323 kubelet[1863]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:54:06.799323 kubelet[1863]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:54:06.799617 kubelet[1863]: I0702 07:54:06.799367 1863 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:54:07.053678 kubelet[1863]: I0702 07:54:07.053638 1863 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:54:07.053678 kubelet[1863]: I0702 07:54:07.053678 1863 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:54:07.053982 kubelet[1863]: I0702 07:54:07.053959 1863 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:54:07.077157 kubelet[1863]: I0702 07:54:07.077132 1863 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:54:07.081392 kubelet[1863]: E0702 07:54:07.081347 1863 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.088981 kubelet[1863]: I0702 07:54:07.088966 1863 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:54:07.089251 kubelet[1863]: I0702 07:54:07.089231 1863 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:54:07.089414 kubelet[1863]: I0702 07:54:07.089401 1863 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:54:07.089733 kubelet[1863]: I0702 07:54:07.089721 1863 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:54:07.089767 kubelet[1863]: I0702 07:54:07.089735 1863 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:54:07.090227 kubelet[1863]: I0702 07:54:07.090208 1863 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:54:07.091913 kubelet[1863]: I0702 07:54:07.091893 1863 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:54:07.091913 kubelet[1863]: I0702 
07:54:07.091914 1863 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:54:07.091968 kubelet[1863]: I0702 07:54:07.091937 1863 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:54:07.091968 kubelet[1863]: I0702 07:54:07.091951 1863 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:54:07.092419 kubelet[1863]: W0702 07:54:07.092383 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.092546 kubelet[1863]: E0702 07:54:07.092493 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.092546 kubelet[1863]: W0702 07:54:07.092419 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.092546 kubelet[1863]: E0702 07:54:07.092533 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.092887 kubelet[1863]: I0702 07:54:07.092862 1863 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:54:07.096283 kubelet[1863]: W0702 07:54:07.096253 1863 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 2 07:54:07.096706 kubelet[1863]: I0702 07:54:07.096685 1863 server.go:1232] "Started kubelet" Jul 2 07:54:07.096842 kubelet[1863]: I0702 07:54:07.096811 1863 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:54:07.096900 kubelet[1863]: I0702 07:54:07.096865 1863 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:54:07.097096 kubelet[1863]: I0702 07:54:07.097075 1863 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:54:07.097532 kubelet[1863]: I0702 07:54:07.097510 1863 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:54:07.096000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:07.096000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:07.102137 kubelet[1863]: I0702 07:54:07.098087 1863 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 2 07:54:07.102137 kubelet[1863]: I0702 07:54:07.098112 1863 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 2 07:54:07.102137 kubelet[1863]: I0702 07:54:07.098152 1863 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:54:07.102137 kubelet[1863]: E0702 07:54:07.098408 1863 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:54:07.102137 kubelet[1863]: E0702 07:54:07.098425 1863 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:54:07.102262 kubelet[1863]: E0702 07:54:07.099432 1863 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de562a3bf56f47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 54, 7, 96663879, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 54, 7, 96663879, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.138:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.138:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:54:07.102262 kubelet[1863]: E0702 07:54:07.099558 1863 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:54:07.102262 kubelet[1863]: I0702 07:54:07.099580 1863 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:54:07.102262 kubelet[1863]: I0702 07:54:07.099785 1863 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:54:07.102262 kubelet[1863]: I0702 07:54:07.099969 1863 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:54:07.102415 kubelet[1863]: W0702 07:54:07.100160 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.102415 kubelet[1863]: E0702 07:54:07.100190 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.102415 kubelet[1863]: E0702 07:54:07.100682 1863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Jul 2 07:54:07.111007 kernel: audit: type=1400 audit(1719906847.096:200): avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:07.111101 kernel: audit: type=1401 audit(1719906847.096:200): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:07.111122 kernel: audit: type=1300 audit(1719906847.096:200): arch=c000003e syscall=188 success=no exit=-22 
a0=c00039bc80 a1=c00078ae88 a2=c00039bc50 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.111140 kernel: audit: type=1327 audit(1719906847.096:200): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:07.096000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00039bc80 a1=c00078ae88 a2=c00039bc50 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:07.096000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:07.114357 kernel: audit: type=1400 audit(1719906847.096:201): avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:07.096000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:07.096000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0007dffc0 a1=c00078aea0 a2=c00039bd10 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:07.098000 audit[1875]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.098000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdd76aaf70 a2=0 a3=7ffdd76aaf5c items=0 ppid=1863 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 07:54:07.099000 audit[1876]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.099000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfd757810 a2=0 a3=7ffdfd7577fc items=0 ppid=1863 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 07:54:07.102000 audit[1878]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.102000 audit[1878]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe3853eb20 a2=0 a3=7ffe3853eb0c items=0 ppid=1863 pid=1878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:54:07.108000 audit[1881]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.108000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcbf7e11f0 a2=0 a3=7ffcbf7e11dc items=0 ppid=1863 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:54:07.116000 audit[1885]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.116000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffef155b3b0 a2=0 a3=7ffef155b39c items=0 ppid=1863 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.116000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 2 07:54:07.118880 kubelet[1863]: I0702 07:54:07.118360 1863 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:54:07.117000 audit[1886]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:07.117000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff9f046030 a2=0 a3=7fff9f04601c items=0 ppid=1863 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 07:54:07.119165 kubelet[1863]: I0702 07:54:07.119148 1863 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:54:07.119194 kubelet[1863]: I0702 07:54:07.119173 1863 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:54:07.119215 kubelet[1863]: I0702 07:54:07.119196 1863 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:54:07.119259 kubelet[1863]: E0702 07:54:07.119250 1863 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:54:07.119854 kubelet[1863]: W0702 07:54:07.119699 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.119854 kubelet[1863]: E0702 07:54:07.119736 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.118000 audit[1887]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.118000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc93d497e0 a2=0 a3=7ffc93d497cc items=0 ppid=1863 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 07:54:07.119000 audit[1889]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:07.119000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3a55c7f0 a2=0 a3=7ffd3a55c7dc items=0 ppid=1863 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.119000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 07:54:07.119000 audit[1890]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.119000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcba4f0680 a2=0 a3=7ffcba4f066c items=0 ppid=1863 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 07:54:07.120000 audit[1891]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:07.120000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff81eb7b90 a2=0 a3=7fff81eb7b7c items=0 ppid=1863 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 07:54:07.121000 audit[1892]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:07.121000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1c554fc0 a2=0 a3=7ffc1c554fac items=0 ppid=1863 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.121000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 07:54:07.121000 audit[1893]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:07.121000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff8b8fb3c0 a2=0 a3=7fff8b8fb3ac items=0 ppid=1863 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 07:54:07.137222 kubelet[1863]: I0702 07:54:07.137207 1863 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:54:07.137222 kubelet[1863]: I0702 07:54:07.137219 1863 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:54:07.137307 kubelet[1863]: I0702 07:54:07.137230 1863 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:54:07.201384 kubelet[1863]: I0702 07:54:07.201351 1863 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:07.201752 kubelet[1863]: E0702 07:54:07.201722 1863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 2 07:54:07.219865 kubelet[1863]: E0702 07:54:07.219844 1863 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:54:07.301545 kubelet[1863]: E0702 07:54:07.301506 1863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Jul 2 07:54:07.402679 kubelet[1863]: I0702 07:54:07.402551 1863 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:07.402866 kubelet[1863]: E0702 07:54:07.402841 1863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 2 07:54:07.420077 kubelet[1863]: E0702 07:54:07.420048 1863 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:54:07.426511 kubelet[1863]: I0702 
07:54:07.426486 1863 policy_none.go:49] "None policy: Start" Jul 2 07:54:07.427164 kubelet[1863]: I0702 07:54:07.427142 1863 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:54:07.427208 kubelet[1863]: I0702 07:54:07.427185 1863 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:54:07.434638 kubelet[1863]: I0702 07:54:07.434582 1863 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:54:07.434000 audit[1863]: AVC avc: denied { mac_admin } for pid=1863 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:07.434000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:07.434000 audit[1863]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0011062a0 a1=c000c6adf8 a2=c001106270 a3=25 items=0 ppid=1 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:07.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:07.434872 kubelet[1863]: I0702 07:54:07.434688 1863 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 2 07:54:07.434872 kubelet[1863]: I0702 07:54:07.434852 1863 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:54:07.435612 kubelet[1863]: E0702 07:54:07.435575 1863 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:54:07.702314 kubelet[1863]: E0702 07:54:07.702283 1863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Jul 2 07:54:07.804716 kubelet[1863]: I0702 07:54:07.804692 1863 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:07.805042 kubelet[1863]: E0702 07:54:07.804889 1863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 2 07:54:07.821122 kubelet[1863]: I0702 07:54:07.821087 1863 topology_manager.go:215] "Topology Admit Handler" podUID="a0d5ef88b73f061f35b83016799c82e4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:54:07.822073 kubelet[1863]: I0702 07:54:07.822050 1863 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:54:07.822582 kubelet[1863]: I0702 07:54:07.822560 1863 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:54:07.904117 kubelet[1863]: I0702 07:54:07.904090 1863 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:07.904117 kubelet[1863]: I0702 07:54:07.904122 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:07.904304 kubelet[1863]: I0702 07:54:07.904146 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:07.904304 kubelet[1863]: I0702 07:54:07.904165 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:54:07.904304 kubelet[1863]: I0702 07:54:07.904186 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:07.904304 kubelet[1863]: I0702 07:54:07.904208 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:07.904304 kubelet[1863]: I0702 07:54:07.904228 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:07.904433 kubelet[1863]: I0702 07:54:07.904247 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:07.904433 kubelet[1863]: I0702 07:54:07.904266 1863 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:07.951472 kubelet[1863]: W0702 07:54:07.951446 
1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:07.951536 kubelet[1863]: E0702 07:54:07.951478 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.117427 kubelet[1863]: W0702 07:54:08.117320 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.117427 kubelet[1863]: E0702 07:54:08.117377 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.125307 kubelet[1863]: E0702 07:54:08.125281 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.125899 env[1302]: time="2024-07-02T07:54:08.125860753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0d5ef88b73f061f35b83016799c82e4,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:08.126912 kubelet[1863]: E0702 07:54:08.126888 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.127178 kubelet[1863]: E0702 07:54:08.127164 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.127231 env[1302]: time="2024-07-02T07:54:08.127166927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:08.127529 env[1302]: time="2024-07-02T07:54:08.127499260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:08.280821 kubelet[1863]: W0702 07:54:08.280756 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.280885 kubelet[1863]: E0702 07:54:08.280835 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.503027 kubelet[1863]: E0702 07:54:08.503000 1863 controller.go:146] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Jul 2 07:54:08.535536 kubelet[1863]: W0702 07:54:08.535465 1863 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.535536 kubelet[1863]: E0702 07:54:08.535519 1863 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 2 07:54:08.606716 kubelet[1863]: I0702 07:54:08.606681 1863 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:08.606931 kubelet[1863]: E0702 07:54:08.606909 1863 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 2 07:54:08.664358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766902287.mount: Deactivated successfully. Jul 2 07:54:08.669454 env[1302]: time="2024-07-02T07:54:08.669414042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.672452 env[1302]: time="2024-07-02T07:54:08.672424052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.674294 env[1302]: time="2024-07-02T07:54:08.674260112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.675255 env[1302]: time="2024-07-02T07:54:08.675226530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.677170 env[1302]: time="2024-07-02T07:54:08.677141500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.678364 env[1302]: time="2024-07-02T07:54:08.678345654Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.679400 env[1302]: time="2024-07-02T07:54:08.679382185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.680516 env[1302]: time="2024-07-02T07:54:08.680497447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.681568 env[1302]: time="2024-07-02T07:54:08.681550298Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.682790 env[1302]: time="2024-07-02T07:54:08.682773559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.683954 env[1302]: time="2024-07-02T07:54:08.683928842Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.685389 env[1302]: time="2024-07-02T07:54:08.685363137Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:08.710694 env[1302]: time="2024-07-02T07:54:08.710054126Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:08.710694 env[1302]: time="2024-07-02T07:54:08.710559879Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:08.710694 env[1302]: time="2024-07-02T07:54:08.710570813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:08.712702 env[1302]: time="2024-07-02T07:54:08.712629883Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c66d474850690c886ea9d586ba95c2fb459d70a3889c5685c47a53cc14a2abd5 pid=1907 runtime=io.containerd.runc.v2 Jul 2 07:54:08.714567 env[1302]: time="2024-07-02T07:54:08.714511441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:08.714567 env[1302]: time="2024-07-02T07:54:08.714538905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:08.714567 env[1302]: time="2024-07-02T07:54:08.714548885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:08.715214 env[1302]: time="2024-07-02T07:54:08.714682974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dfb30e8ab79f11771946b97653b3d1c517d9eba01038999ce38bd7b877e7f2d pid=1922 runtime=io.containerd.runc.v2 Jul 2 07:54:08.716090 env[1302]: time="2024-07-02T07:54:08.716027284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:08.716090 env[1302]: time="2024-07-02T07:54:08.716073285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:08.716195 env[1302]: time="2024-07-02T07:54:08.716086766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:08.716291 env[1302]: time="2024-07-02T07:54:08.716248268Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/216c28e283cf18cf88453b731e96dca98d82c42000394b33f48efd4a8d95e55a pid=1934 runtime=io.containerd.runc.v2 Jul 2 07:54:08.763383 env[1302]: time="2024-07-02T07:54:08.762556862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"216c28e283cf18cf88453b731e96dca98d82c42000394b33f48efd4a8d95e55a\"" Jul 2 07:54:08.764181 kubelet[1863]: E0702 07:54:08.763862 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.768759 env[1302]: time="2024-07-02T07:54:08.768724554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"c66d474850690c886ea9d586ba95c2fb459d70a3889c5685c47a53cc14a2abd5\"" Jul 2 07:54:08.769273 env[1302]: time="2024-07-02T07:54:08.769248161Z" level=info msg="CreateContainer within sandbox \"216c28e283cf18cf88453b731e96dca98d82c42000394b33f48efd4a8d95e55a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:54:08.769647 kubelet[1863]: E0702 07:54:08.769627 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.771273 env[1302]: time="2024-07-02T07:54:08.771250208Z" level=info msg="CreateContainer within sandbox \"c66d474850690c886ea9d586ba95c2fb459d70a3889c5685c47a53cc14a2abd5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:54:08.772581 env[1302]: time="2024-07-02T07:54:08.772557084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a0d5ef88b73f061f35b83016799c82e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dfb30e8ab79f11771946b97653b3d1c517d9eba01038999ce38bd7b877e7f2d\"" Jul 2 07:54:08.773260 kubelet[1863]: E0702 07:54:08.773173 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:08.774848 env[1302]: time="2024-07-02T07:54:08.774828873Z" level=info msg="CreateContainer within sandbox \"9dfb30e8ab79f11771946b97653b3d1c517d9eba01038999ce38bd7b877e7f2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:54:08.791482 env[1302]: time="2024-07-02T07:54:08.791445625Z" level=info msg="CreateContainer within sandbox \"c66d474850690c886ea9d586ba95c2fb459d70a3889c5685c47a53cc14a2abd5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bef199785955e77ee6345800add9daf1a4a3a19d80854447ff1b9d6e4fabd90e\"" Jul 2 07:54:08.791958 env[1302]: time="2024-07-02T07:54:08.791938799Z" level=info msg="StartContainer for \"bef199785955e77ee6345800add9daf1a4a3a19d80854447ff1b9d6e4fabd90e\"" Jul 2 07:54:08.800552 env[1302]: time="2024-07-02T07:54:08.800512760Z" level=info msg="CreateContainer within sandbox \"216c28e283cf18cf88453b731e96dca98d82c42000394b33f48efd4a8d95e55a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container 
id \"429740117f9db6ac07c017fac0f1d8a3f5aca96b9538d7fdb945b7431ab6fe08\"" Jul 2 07:54:08.800939 env[1302]: time="2024-07-02T07:54:08.800908346Z" level=info msg="StartContainer for \"429740117f9db6ac07c017fac0f1d8a3f5aca96b9538d7fdb945b7431ab6fe08\"" Jul 2 07:54:08.803830 env[1302]: time="2024-07-02T07:54:08.803796647Z" level=info msg="CreateContainer within sandbox \"9dfb30e8ab79f11771946b97653b3d1c517d9eba01038999ce38bd7b877e7f2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8cceb0fd140031f0de65734fcb7ea210740115922abfe83b4b983b2fd1919414\"" Jul 2 07:54:08.804672 env[1302]: time="2024-07-02T07:54:08.804618633Z" level=info msg="StartContainer for \"8cceb0fd140031f0de65734fcb7ea210740115922abfe83b4b983b2fd1919414\"" Jul 2 07:54:08.849748 env[1302]: time="2024-07-02T07:54:08.849313245Z" level=info msg="StartContainer for \"bef199785955e77ee6345800add9daf1a4a3a19d80854447ff1b9d6e4fabd90e\" returns successfully" Jul 2 07:54:08.866129 env[1302]: time="2024-07-02T07:54:08.866095662Z" level=info msg="StartContainer for \"8cceb0fd140031f0de65734fcb7ea210740115922abfe83b4b983b2fd1919414\" returns successfully" Jul 2 07:54:08.872771 env[1302]: time="2024-07-02T07:54:08.872725022Z" level=info msg="StartContainer for \"429740117f9db6ac07c017fac0f1d8a3f5aca96b9538d7fdb945b7431ab6fe08\" returns successfully" Jul 2 07:54:09.125172 kubelet[1863]: E0702 07:54:09.125086 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:09.127031 kubelet[1863]: E0702 07:54:09.127012 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:09.128590 kubelet[1863]: E0702 07:54:09.128572 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:10.114740 kubelet[1863]: E0702 07:54:10.114688 1863 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 07:54:10.129962 kubelet[1863]: E0702 07:54:10.129932 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:10.208872 kubelet[1863]: I0702 07:54:10.208843 1863 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:10.211255 kubelet[1863]: I0702 07:54:10.211229 1863 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:54:10.231928 kubelet[1863]: E0702 07:54:10.231888 1863 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:54:11.094694 kubelet[1863]: I0702 07:54:11.094645 1863 apiserver.go:52] "Watching apiserver" Jul 2 07:54:11.100714 kubelet[1863]: I0702 07:54:11.100681 1863 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:54:11.713345 kubelet[1863]: E0702 07:54:11.713328 1863 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:12.131396 kubelet[1863]: E0702 07:54:12.131260 1863 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:12.820411 systemd[1]: Reloading. Jul 2 07:54:12.878528 /usr/lib/systemd/system-generators/torcx-generator[2160]: time="2024-07-02T07:54:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:54:12.878553 /usr/lib/systemd/system-generators/torcx-generator[2160]: time="2024-07-02T07:54:12Z" level=info msg="torcx already run" Jul 2 07:54:13.116345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:54:13.116360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:54:13.135225 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:54:13.204004 kubelet[1863]: I0702 07:54:13.203984 1863 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:54:13.204083 systemd[1]: Stopping kubelet.service... Jul 2 07:54:13.222945 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:54:13.223196 systemd[1]: Stopped kubelet.service. Jul 2 07:54:13.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:13.224150 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 2 07:54:13.224215 kernel: audit: type=1131 audit(1719906853.222:215): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:13.224679 systemd[1]: Starting kubelet.service... Jul 2 07:54:13.295789 systemd[1]: Started kubelet.service. Jul 2 07:54:13.300631 kernel: audit: type=1130 audit(1719906853.296:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:13.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:13.343861 kubelet[2216]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:54:13.343861 kubelet[2216]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:54:13.343861 kubelet[2216]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:54:13.344358 kubelet[2216]: I0702 07:54:13.343910 2216 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:54:13.347444 kubelet[2216]: I0702 07:54:13.347429 2216 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:54:13.347444 kubelet[2216]: I0702 07:54:13.347444 2216 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:54:13.347606 kubelet[2216]: I0702 07:54:13.347582 2216 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:54:13.348899 kubelet[2216]: I0702 07:54:13.348880 2216 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:54:13.349655 kubelet[2216]: I0702 07:54:13.349639 2216 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:54:13.355628 kubelet[2216]: I0702 07:54:13.355605 2216 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:54:13.356089 kubelet[2216]: I0702 07:54:13.356078 2216 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:54:13.356281 kubelet[2216]: I0702 07:54:13.356265 2216 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:54:13.356414 kubelet[2216]: I0702 07:54:13.356400 2216 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:54:13.356486 kubelet[2216]: I0702 07:54:13.356471 2216 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:54:13.356622 kubelet[2216]: I0702 07:54:13.356586 2216 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:54:13.356773 kubelet[2216]: I0702 07:54:13.356762 2216 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:54:13.356850 kubelet[2216]: I0702 07:54:13.356834 2216 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:54:13.356949 kubelet[2216]: 
I0702 07:54:13.356934 2216 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:54:13.357039 kubelet[2216]: I0702 07:54:13.357024 2216 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:54:13.363944 kubelet[2216]: I0702 07:54:13.363528 2216 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:54:13.364099 kubelet[2216]: I0702 07:54:13.364072 2216 server.go:1232] "Started kubelet" Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.364316 2216 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.364420 2216 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.364614 2216 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:54:13.367460 kubelet[2216]: E0702 07:54:13.365120 2216 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:54:13.367460 kubelet[2216]: E0702 07:54:13.365137 2216 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.365157 2216 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.367266 2216 kubelet.go:1386] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.367291 2216 kubelet.go:1390] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 2 07:54:13.367460 kubelet[2216]: I0702 07:54:13.367320 2216 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:54:13.366000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:13.371081 kubelet[2216]: I0702 07:54:13.371064 2216 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:54:13.371163 kubelet[2216]: I0702 07:54:13.371147 2216 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:54:13.371255 kubelet[2216]: I0702 07:54:13.371240 2216 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:54:13.378435 kernel: audit: type=1400 audit(1719906853.366:217): avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:13.378498 kernel: audit: type=1401 audit(1719906853.366:217): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:13.378518 kernel: audit: type=1300 audit(1719906853.366:217): arch=c000003e syscall=188 success=no exit=-22 a0=c000359ec0 a1=c000a950c8 a2=c000359e90 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:13.366000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:13.366000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000359ec0 a1=c000a950c8 a2=c000359e90 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:13.382981 kernel: audit: type=1327 audit(1719906853.366:217): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:13.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:13.386300 kernel: audit: type=1400 audit(1719906853.366:218): avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:13.366000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:13.388264 kernel: audit: type=1401 audit(1719906853.366:218): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:13.366000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:13.392976 kernel: audit: type=1300 audit(1719906853.366:218): arch=c000003e syscall=188 success=no exit=-22 a0=c00085af00 a1=c000a950e0 a2=c000359f50 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:13.366000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00085af00 a1=c000a950e0 a2=c000359f50 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:13.393097 kubelet[2216]: I0702 07:54:13.392304 2216 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:54:13.397323 kernel: audit: type=1327 audit(1719906853.366:218): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:13.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:13.397405 kubelet[2216]: I0702 07:54:13.393285 2216 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:54:13.397405 kubelet[2216]: I0702 07:54:13.393299 2216 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:54:13.397405 kubelet[2216]: I0702 07:54:13.393315 2216 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:54:13.397405 kubelet[2216]: E0702 07:54:13.393369 2216 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:54:13.441820 kubelet[2216]: I0702 07:54:13.441786 2216 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:54:13.441820 kubelet[2216]: I0702 07:54:13.441804 2216 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:54:13.441820 kubelet[2216]: I0702 07:54:13.441818 2216 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:54:13.441998 kubelet[2216]: I0702 07:54:13.441980 2216 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:54:13.442040 kubelet[2216]: I0702 07:54:13.442003 2216 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:54:13.442040 kubelet[2216]: I0702 07:54:13.442009 2216 policy_none.go:49] "None policy: Start" Jul 2 07:54:13.442351 kubelet[2216]: I0702 07:54:13.442333 2216 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:54:13.442351 kubelet[2216]: I0702 07:54:13.442350 2216 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:54:13.442482 kubelet[2216]: I0702 07:54:13.442472 2216 state_mem.go:75] "Updated machine memory state" Jul 2 07:54:13.443374 kubelet[2216]: I0702 07:54:13.443358 2216 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:54:13.442000 audit[2216]: AVC avc: denied { mac_admin } for pid=2216 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:13.442000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 2 07:54:13.442000 audit[2216]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b3d050 a1=c00089f1d0 a2=c000b3d020 a3=25 items=0 ppid=1 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:13.442000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 2 07:54:13.443562 kubelet[2216]: I0702 07:54:13.443416 2216 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 2 07:54:13.445701 kubelet[2216]: I0702 07:54:13.445683 2216 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:54:13.493656 kubelet[2216]: I0702 07:54:13.493619 2216 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:54:13.493747 kubelet[2216]: I0702 07:54:13.493702 2216 topology_manager.go:215] "Topology Admit Handler" podUID="a0d5ef88b73f061f35b83016799c82e4" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:54:13.493747 kubelet[2216]: I0702 07:54:13.493729 2216 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:54:13.498492 kubelet[2216]: E0702 07:54:13.498465 2216 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:13.549618 kubelet[2216]: I0702 07:54:13.549583 2216 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 07:54:13.553861 kubelet[2216]: I0702 07:54:13.553838 2216 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 07:54:13.554495 kubelet[2216]: I0702 07:54:13.553900 2216 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 07:54:13.672767 kubelet[2216]: I0702 07:54:13.672725 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:13.672908 kubelet[2216]: I0702 07:54:13.672790 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:13.672908 kubelet[2216]: I0702 07:54:13.672811 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:13.672908 kubelet[2216]: I0702 07:54:13.672894 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:13.672989 kubelet[2216]: I0702 07:54:13.672937 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:13.672989 kubelet[2216]: I0702 07:54:13.672971 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:13.673041 kubelet[2216]: I0702 07:54:13.673016 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:54:13.673065 kubelet[2216]: I0702 07:54:13.673052 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:54:13.673102 kubelet[2216]: I0702 07:54:13.673071 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a0d5ef88b73f061f35b83016799c82e4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a0d5ef88b73f061f35b83016799c82e4\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:13.798376 kubelet[2216]: E0702 07:54:13.798332 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:13.799382 kubelet[2216]: E0702 07:54:13.799360 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:13.799706 kubelet[2216]: E0702 07:54:13.799673 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:14.361639 kubelet[2216]: I0702 07:54:14.357974 2216 apiserver.go:52] "Watching apiserver" Jul 2 07:54:14.378763 kubelet[2216]: I0702 07:54:14.378718 2216 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:54:14.404327 kubelet[2216]: E0702 07:54:14.404291 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:14.411186 kubelet[2216]: E0702 07:54:14.411148 2216 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 07:54:14.411535 kubelet[2216]: E0702 07:54:14.411514 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:14.412309 kubelet[2216]: E0702 07:54:14.412268 2216 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:54:14.412723 kubelet[2216]: E0702 
07:54:14.412702 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:14.428240 kubelet[2216]: I0702 07:54:14.428218 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.428163587 podCreationTimestamp="2024-07-02 07:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:54:14.427818239 +0000 UTC m=+1.126636915" watchObservedRunningTime="2024-07-02 07:54:14.428163587 +0000 UTC m=+1.126982263" Jul 2 07:54:14.441800 kubelet[2216]: I0702 07:54:14.441771 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.441736328 podCreationTimestamp="2024-07-02 07:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:54:14.441567771 +0000 UTC m=+1.140386447" watchObservedRunningTime="2024-07-02 07:54:14.441736328 +0000 UTC m=+1.140555004" Jul 2 07:54:14.441918 kubelet[2216]: I0702 07:54:14.441851 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.4418372019999999 podCreationTimestamp="2024-07-02 07:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:54:14.434538536 +0000 UTC m=+1.133357202" watchObservedRunningTime="2024-07-02 07:54:14.441837202 +0000 UTC m=+1.140655878" Jul 2 07:54:15.405797 kubelet[2216]: E0702 07:54:15.405776 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:15.406178 kubelet[2216]: E0702 07:54:15.405945 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:15.406336 kubelet[2216]: E0702 07:54:15.405970 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:17.724149 kubelet[2216]: E0702 07:54:17.724105 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:18.000000 audit[1467]: USER_END pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:54:18.000000 audit[1467]: CRED_DISP pid=1467 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 07:54:18.001670 sudo[1467]: pam_unix(sudo:session): session closed for user root Jul 2 07:54:18.003029 sshd[1461]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:18.002000 audit[1461]: USER_END pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:18.002000 audit[1461]: CRED_DISP pid=1461 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:18.005216 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:58170.service: Deactivated successfully. Jul 2 07:54:18.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:58170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:18.006335 systemd-logind[1289]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:54:18.006382 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:54:18.007651 systemd-logind[1289]: Removed session 7. Jul 2 07:54:21.115054 update_engine[1290]: I0702 07:54:21.115011 1290 update_attempter.cc:509] Updating boot flags... Jul 2 07:54:21.234978 kubelet[2216]: E0702 07:54:21.234952 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:21.413395 kubelet[2216]: E0702 07:54:21.413282 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:25.060502 kubelet[2216]: E0702 07:54:25.060478 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:26.058416 kubelet[2216]: I0702 07:54:26.058381 2216 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:54:26.058906 env[1302]: time="2024-07-02T07:54:26.058854029Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
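The recurring dns.go:153 "Nameserver limits exceeded" entries above show kubelet dropping extra resolvers and applying only "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that behaviour (not kubelet's actual code; the three-nameserver cap is an assumption consistent with the three servers kept in the applied line) is:

```python
# Minimal sketch: reproduce the resolv.conf check behind the
# "Nameserver limits exceeded" events in the log above.
# MAX_NAMESERVERS is an assumed limit, matching the three servers
# that kubelet keeps in the applied nameserver line.
MAX_NAMESERVERS = 3

def effective_nameservers(resolv_conf_text: str) -> tuple[list[str], bool]:
    """Return (nameservers actually applied, whether any were omitted)."""
    servers = [
        parts[1]
        for line in resolv_conf_text.splitlines()
        if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1
    ]
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS

if __name__ == "__main__":
    sample = (
        "nameserver 1.1.1.1\n"
        "nameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\n"
        "nameserver 9.9.9.9\n"  # hypothetical fourth entry that would be omitted
    )
    applied, truncated = effective_nameservers(sample)
    if truncated:
        print("Nameserver limits exceeded, applied line:", " ".join(applied))
```

Run against a host resolv.conf with more than three nameserver lines, this prints the same "applied line" that the kubelet errors report.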
Jul 2 07:54:26.059149 kubelet[2216]: I0702 07:54:26.059068 2216 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:54:26.799271 kubelet[2216]: I0702 07:54:26.799244 2216 topology_manager.go:215] "Topology Admit Handler" podUID="e5ceb7e9-198a-42be-94a7-c556a0f7aa60" podNamespace="kube-system" podName="kube-proxy-fs4sf" Jul 2 07:54:26.943619 kubelet[2216]: I0702 07:54:26.943552 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrdk9\" (UniqueName: \"kubernetes.io/projected/e5ceb7e9-198a-42be-94a7-c556a0f7aa60-kube-api-access-vrdk9\") pod \"kube-proxy-fs4sf\" (UID: \"e5ceb7e9-198a-42be-94a7-c556a0f7aa60\") " pod="kube-system/kube-proxy-fs4sf" Jul 2 07:54:26.943619 kubelet[2216]: I0702 07:54:26.943610 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5ceb7e9-198a-42be-94a7-c556a0f7aa60-lib-modules\") pod \"kube-proxy-fs4sf\" (UID: \"e5ceb7e9-198a-42be-94a7-c556a0f7aa60\") " pod="kube-system/kube-proxy-fs4sf" Jul 2 07:54:26.943619 kubelet[2216]: I0702 07:54:26.943633 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5ceb7e9-198a-42be-94a7-c556a0f7aa60-kube-proxy\") pod \"kube-proxy-fs4sf\" (UID: \"e5ceb7e9-198a-42be-94a7-c556a0f7aa60\") " pod="kube-system/kube-proxy-fs4sf" Jul 2 07:54:26.943851 kubelet[2216]: I0702 07:54:26.943649 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5ceb7e9-198a-42be-94a7-c556a0f7aa60-xtables-lock\") pod \"kube-proxy-fs4sf\" (UID: \"e5ceb7e9-198a-42be-94a7-c556a0f7aa60\") " pod="kube-system/kube-proxy-fs4sf" Jul 2 07:54:27.009242 kubelet[2216]: I0702 07:54:27.009197 2216 topology_manager.go:215] "Topology Admit Handler" podUID="685b4287-1366-4588-82a9-fe848c9a3bb4" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-k44rr" Jul 2 07:54:27.101857 kubelet[2216]: E0702 07:54:27.101751 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:27.102176 env[1302]: time="2024-07-02T07:54:27.102128896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fs4sf,Uid:e5ceb7e9-198a-42be-94a7-c556a0f7aa60,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:27.144754 kubelet[2216]: I0702 07:54:27.144732 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrv4z\" (UniqueName: \"kubernetes.io/projected/685b4287-1366-4588-82a9-fe848c9a3bb4-kube-api-access-rrv4z\") pod \"tigera-operator-76c4974c85-k44rr\" (UID: \"685b4287-1366-4588-82a9-fe848c9a3bb4\") " pod="tigera-operator/tigera-operator-76c4974c85-k44rr" Jul 2 07:54:27.144905 kubelet[2216]: I0702 07:54:27.144780 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/685b4287-1366-4588-82a9-fe848c9a3bb4-var-lib-calico\") pod \"tigera-operator-76c4974c85-k44rr\" (UID: \"685b4287-1366-4588-82a9-fe848c9a3bb4\") " pod="tigera-operator/tigera-operator-76c4974c85-k44rr" Jul 2 07:54:27.233631 env[1302]: time="2024-07-02T07:54:27.233545976Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:27.233631 env[1302]: time="2024-07-02T07:54:27.233603298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:27.233631 env[1302]: time="2024-07-02T07:54:27.233616017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:27.233908 env[1302]: time="2024-07-02T07:54:27.233835826Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4325a5269ab4559d2ed16ddebdf42c2ea28337e264fee66687efe00037248dbd pid=2338 runtime=io.containerd.runc.v2 Jul 2 07:54:27.265412 env[1302]: time="2024-07-02T07:54:27.265356920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fs4sf,Uid:e5ceb7e9-198a-42be-94a7-c556a0f7aa60,Namespace:kube-system,Attempt:0,} returns sandbox id \"4325a5269ab4559d2ed16ddebdf42c2ea28337e264fee66687efe00037248dbd\"" Jul 2 07:54:27.266043 kubelet[2216]: E0702 07:54:27.266017 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:27.268822 env[1302]: time="2024-07-02T07:54:27.268771506Z" level=info msg="CreateContainer within sandbox \"4325a5269ab4559d2ed16ddebdf42c2ea28337e264fee66687efe00037248dbd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:54:27.282806 env[1302]: time="2024-07-02T07:54:27.282764195Z" level=info msg="CreateContainer within sandbox \"4325a5269ab4559d2ed16ddebdf42c2ea28337e264fee66687efe00037248dbd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f099e7031d73ecbd8a094dbbe23130a68913835b9f167ec8ad55d8352c75cf7b\"" Jul 2 07:54:27.283315 env[1302]: time="2024-07-02T07:54:27.283253467Z" level=info msg="StartContainer for \"f099e7031d73ecbd8a094dbbe23130a68913835b9f167ec8ad55d8352c75cf7b\"" Jul 2 07:54:27.312010 env[1302]: time="2024-07-02T07:54:27.311950361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-k44rr,Uid:685b4287-1366-4588-82a9-fe848c9a3bb4,Namespace:tigera-operator,Attempt:0,}" Jul 2 07:54:27.327646 env[1302]: time="2024-07-02T07:54:27.327234686Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:27.327646 env[1302]: time="2024-07-02T07:54:27.327283289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:27.327646 env[1302]: time="2024-07-02T07:54:27.327292881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:27.327646 env[1302]: time="2024-07-02T07:54:27.327479392Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b8a6c5c9b8cd56dde84506cfe5250f3b11a8db74aaee8f0417945b69f9a3a97 pid=2409 runtime=io.containerd.runc.v2 Jul 2 07:54:27.337020 env[1302]: time="2024-07-02T07:54:27.336973808Z" level=info msg="StartContainer for \"f099e7031d73ecbd8a094dbbe23130a68913835b9f167ec8ad55d8352c75cf7b\" returns successfully" Jul 2 07:54:27.375127 env[1302]: time="2024-07-02T07:54:27.374317187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-k44rr,Uid:685b4287-1366-4588-82a9-fe848c9a3bb4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b8a6c5c9b8cd56dde84506cfe5250f3b11a8db74aaee8f0417945b69f9a3a97\"" Jul 2 07:54:27.377206 env[1302]: time="2024-07-02T07:54:27.377175067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 07:54:27.397736 kernel: kauditd_printk_skb: 9 callbacks suppressed Jul 2 07:54:27.397854 kernel: audit: type=1325 audit(1719906867.393:225): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.397875 kernel: audit: type=1300 audit(1719906867.393:225): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0947bd20 a2=0 a3=7ffd0947bd0c items=0 ppid=2389 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.393000 audit[2471]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.393000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd0947bd20 a2=0 a3=7ffd0947bd0c items=0 ppid=2389 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:54:27.404899 kernel: audit: type=1327 audit(1719906867.393:225): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:54:27.404939 kernel: audit: type=1325 audit(1719906867.396:226): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.396000 audit[2472]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.396000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe010f4e10 a2=0 a3=7ffe010f4dfc items=0 ppid=2389 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.411634 kernel: audit: type=1300 audit(1719906867.396:226): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe010f4e10 a2=0 a3=7ffe010f4dfc items=0 ppid=2389 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.411676 kernel: audit: type=1327 audit(1719906867.396:226): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:54:27.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:54:27.396000 audit[2473]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.416003 kernel: audit: type=1325 audit(1719906867.396:227): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.416054 kernel: audit: type=1300 audit(1719906867.396:227): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe43500220 a2=0 a3=7ffe4350020c items=0 ppid=2389 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.396000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe43500220 a2=0 a3=7ffe4350020c items=0 ppid=2389 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:54:27.423300 kernel: audit: type=1327 audit(1719906867.396:227): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:54:27.423351 kernel: audit: type=1325 audit(1719906867.398:228): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.398000 audit[2475]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.423564 kubelet[2216]: E0702 07:54:27.423534 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:27.398000 audit[2475]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd14304ba0 a2=0 a3=7ffd14304b8c items=0 ppid=2389 pid=2475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.398000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:54:27.400000 audit[2474]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.400000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffded25ad50 a2=0 a3=7ffded25ad3c items=0 ppid=2389 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.400000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:54:27.401000 audit[2476]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.401000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc2ac53c0 a2=0 a3=7ffcc2ac53ac items=0 ppid=2389 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:54:27.429350 kubelet[2216]: I0702 07:54:27.429320 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fs4sf" podStartSLOduration=1.429284863 podCreationTimestamp="2024-07-02 07:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:54:27.42896349 +0000 UTC m=+14.127782196" watchObservedRunningTime="2024-07-02 07:54:27.429284863 +0000 UTC m=+14.128103539" Jul 2 07:54:27.495000 audit[2477]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.495000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffdffbfd60 a2=0 a3=7fffdffbfd4c items=0 ppid=2389 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:54:27.497000 audit[2479]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.497000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffea92f1b50 a2=0 a3=7ffea92f1b3c items=0 ppid=2389 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 07:54:27.500000 audit[2482]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.500000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff77c068d0 a2=0 a3=7fff77c068bc items=0 ppid=2389 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.500000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 07:54:27.501000 audit[2483]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.501000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9da14a10 a2=0 a3=7fff9da149fc items=0 ppid=2389 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.501000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:54:27.503000 audit[2485]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.503000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe33f16790 a2=0 a3=7ffe33f1677c items=0 ppid=2389 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:54:27.504000 audit[2486]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.504000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff05c65640 a2=0 a3=7fff05c6562c items=0 ppid=2389 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:54:27.506000 audit[2488]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.506000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc20b045a0 a2=0 a3=7ffc20b0458c items=0 ppid=2389 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:54:27.509000 audit[2491]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.509000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe962f60e0 a2=0 a3=7ffe962f60cc items=0 ppid=2389 
pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 07:54:27.510000 audit[2492]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.510000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc45c75aa0 a2=0 a3=7ffc45c75a8c items=0 ppid=2389 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.510000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:54:27.512000 audit[2494]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.512000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcda4f8600 a2=0 a3=7ffcda4f85ec items=0 ppid=2389 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:54:27.513000 audit[2495]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.513000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa3fadc60 a2=0 a3=7fffa3fadc4c items=0 ppid=2389 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:54:27.515000 audit[2497]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.515000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcdd87e8c0 a2=0 a3=7ffcdd87e8ac items=0 ppid=2389 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:54:27.520000 audit[2500]: NETFILTER_CFG table=filter:56 family=2 entries=1 
op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.520000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea096a750 a2=0 a3=7ffea096a73c items=0 ppid=2389 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:54:27.523000 audit[2503]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.523000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd139a4d60 a2=0 a3=7ffd139a4d4c items=0 ppid=2389 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:54:27.524000 audit[2504]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.524000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6f8eea30 a2=0 a3=7fff6f8eea1c items=0 ppid=2389 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.524000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:54:27.526000 audit[2506]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.526000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd783e7ec0 a2=0 a3=7ffd783e7eac items=0 ppid=2389 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.526000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:54:27.529000 audit[2509]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.529000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffefbb2b750 a2=0 a3=7ffefbb2b73c items=0 ppid=2389 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.529000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:54:27.529000 audit[2510]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.529000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2f30f50 a2=0 a3=7ffef2f30f3c items=0 ppid=2389 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.529000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:54:27.531000 audit[2512]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:54:27.531000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffedd9c2790 a2=0 a3=7ffedd9c277c items=0 ppid=2389 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.531000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 07:54:27.548000 audit[2518]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:27.548000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe44e87210 a2=0 a3=7ffe44e871fc items=0 ppid=2389 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.548000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:27.556000 audit[2518]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:27.556000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe44e87210 a2=0 a3=7ffe44e871fc items=0 ppid=2389 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:27.557000 audit[2524]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.557000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5ae06f70 a2=0 a3=7ffd5ae06f5c items=0 ppid=2389 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:54:27.559000 audit[2526]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.559000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff442330a0 a2=0 a3=7fff4423308c items=0 ppid=2389 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 07:54:27.562000 audit[2529]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.562000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd9bb8abd0 a2=0 a3=7ffd9bb8abbc items=0 ppid=2389 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.562000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 07:54:27.563000 audit[2530]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.563000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcd8e0fa0 a2=0 a3=7fffcd8e0f8c items=0 ppid=2389 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:54:27.566000 audit[2532]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.566000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3aedf6e0 a2=0 a3=7ffe3aedf6cc items=0 ppid=2389 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:54:27.567000 audit[2533]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.567000 
audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5b1b3fe0 a2=0 a3=7fff5b1b3fcc items=0 ppid=2389 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:54:27.568000 audit[2535]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.568000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1fe77cf0 a2=0 a3=7fff1fe77cdc items=0 ppid=2389 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 07:54:27.571000 audit[2538]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.571000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff5b2ed640 a2=0 a3=7fff5b2ed62c items=0 ppid=2389 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:54:27.572000 audit[2539]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.572000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa674dc50 a2=0 a3=7fffa674dc3c items=0 ppid=2389 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:54:27.574000 audit[2541]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.574000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe01896260 a2=0 a3=7ffe0189624c items=0 ppid=2389 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.574000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:54:27.574000 audit[2542]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.574000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffceb1a5a0 a2=0 a3=7fffceb1a58c items=0 ppid=2389 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:54:27.576000 audit[2544]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.576000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe932d1120 a2=0 a3=7ffe932d110c items=0 ppid=2389 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:54:27.579000 audit[2547]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.579000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffee0a43b10 a2=0 a3=7ffee0a43afc items=0 ppid=2389 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:54:27.581000 audit[2550]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.581000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0e080790 a2=0 a3=7ffc0e08077c items=0 ppid=2389 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 07:54:27.582000 audit[2551]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Jul 2 07:54:27.582000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5b6dd750 a2=0 a3=7ffd5b6dd73c items=0 ppid=2389 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:54:27.584000 audit[2553]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.584000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffdb6e75660 a2=0 a3=7ffdb6e7564c items=0 ppid=2389 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:54:27.587000 audit[2556]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.587000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff8abba9b0 a2=0 a3=7fff8abba99c items=0 ppid=2389 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:54:27.587000 audit[2557]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.587000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc6e2e6b0 a2=0 a3=7ffcc6e2e69c items=0 ppid=2389 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:54:27.589000 audit[2559]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.589000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff18667b80 a2=0 a3=7fff18667b6c items=0 ppid=2389 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 
Jul 2 07:54:27.590000 audit[2560]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.590000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc926a78a0 a2=0 a3=7ffc926a788c items=0 ppid=2389 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 07:54:27.592000 audit[2562]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.592000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffed70a8060 a2=0 a3=7ffed70a804c items=0 ppid=2389 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:54:27.594000 audit[2565]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:54:27.594000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff68face50 a2=0 a3=7fff68face3c items=0 ppid=2389 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.594000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:54:27.596000 audit[2567]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:54:27.596000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffcae9955f0 a2=0 a3=7ffcae9955dc items=0 ppid=2389 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.596000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:27.597000 audit[2567]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:54:27.597000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcae9955f0 a2=0 a3=7ffcae9955dc items=0 ppid=2389 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:27.597000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:27.728525 kubelet[2216]: E0702 07:54:27.728506 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:28.545793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4260445105.mount: Deactivated successfully. Jul 2 07:54:29.164681 env[1302]: time="2024-07-02T07:54:29.164618614Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.34.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:29.166280 env[1302]: time="2024-07-02T07:54:29.166233258Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:29.167825 env[1302]: time="2024-07-02T07:54:29.167800124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.34.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:29.169285 env[1302]: time="2024-07-02T07:54:29.169225728Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:29.169624 env[1302]: time="2024-07-02T07:54:29.169589765Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 07:54:29.171110 env[1302]: time="2024-07-02T07:54:29.171083454Z" level=info msg="CreateContainer within sandbox \"1b8a6c5c9b8cd56dde84506cfe5250f3b11a8db74aaee8f0417945b69f9a3a97\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 07:54:29.181115 env[1302]: time="2024-07-02T07:54:29.181081886Z" level=info msg="CreateContainer within sandbox \"1b8a6c5c9b8cd56dde84506cfe5250f3b11a8db74aaee8f0417945b69f9a3a97\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5c1af50ee648969bc9b874d8f61eec5da3b576faf86f671114a0a569b72e991\"" Jul 2 07:54:29.181465 env[1302]: time="2024-07-02T07:54:29.181436272Z" level=info msg="StartContainer for \"a5c1af50ee648969bc9b874d8f61eec5da3b576faf86f671114a0a569b72e991\"" Jul 2 07:54:29.216966 env[1302]: time="2024-07-02T07:54:29.216928994Z" level=info msg="StartContainer for \"a5c1af50ee648969bc9b874d8f61eec5da3b576faf86f671114a0a569b72e991\" returns successfully" Jul 2 07:54:31.885000 audit[2608]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:31.885000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd1b95e5b0 a2=0 a3=7ffd1b95e59c items=0 ppid=2389 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:31.885000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:31.886000 audit[2608]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:31.886000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd1b95e5b0 a2=0 a3=0 items=0 ppid=2389 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:31.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:31.894000 audit[2610]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:31.894000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe8631caa0 a2=0 a3=7ffe8631ca8c items=0 ppid=2389 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:31.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:31.899000 audit[2610]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:31.899000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8631caa0 a2=0 a3=0 items=0 ppid=2389 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:31.899000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:32.004746 kubelet[2216]: I0702 07:54:32.004703 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-k44rr" podStartSLOduration=4.210121237 podCreationTimestamp="2024-07-02 07:54:26 +0000 UTC" firstStartedPulling="2024-07-02 07:54:27.375294017 +0000 UTC m=+14.074112693" lastFinishedPulling="2024-07-02 07:54:29.169833029 +0000 UTC m=+15.868651705" observedRunningTime="2024-07-02 07:54:29.433899338 +0000 UTC m=+16.132718015" watchObservedRunningTime="2024-07-02 07:54:32.004660249 +0000 UTC m=+18.703478925" Jul 2 07:54:32.005316 kubelet[2216]: I0702 07:54:32.005294 2216 topology_manager.go:215] "Topology Admit Handler" podUID="38497315-38c5-4c94-9ba8-95c6637d4916" podNamespace="calico-system" podName="calico-typha-cdd78fc86-x5kkn" Jul 2 07:54:32.044245 kubelet[2216]: I0702 07:54:32.044212 2216 topology_manager.go:215] "Topology Admit Handler" podUID="07022ee5-533c-4ad9-a85a-d80ce0376160" podNamespace="calico-system" podName="calico-node-tpm99" Jul 2 07:54:32.154053 kubelet[2216]: I0702 07:54:32.153947 2216 topology_manager.go:215] "Topology Admit Handler" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" podNamespace="calico-system" podName="csi-node-driver-xjtw8" Jul 2 07:54:32.154210 kubelet[2216]: E0702 07:54:32.154192 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:32.181347 kubelet[2216]: I0702 07:54:32.181302 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-cni-bin-dir\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181347 kubelet[2216]: I0702 07:54:32.181353 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38497315-38c5-4c94-9ba8-95c6637d4916-tigera-ca-bundle\") pod \"calico-typha-cdd78fc86-x5kkn\" (UID: \"38497315-38c5-4c94-9ba8-95c6637d4916\") " pod="calico-system/calico-typha-cdd78fc86-x5kkn" Jul 2 07:54:32.181569 kubelet[2216]: I0702 07:54:32.181370 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-lib-modules\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181569 kubelet[2216]: I0702 07:54:32.181386 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-xtables-lock\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181569 kubelet[2216]: I0702 07:54:32.181404 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/07022ee5-533c-4ad9-a85a-d80ce0376160-node-certs\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181569 kubelet[2216]: I0702 07:54:32.181420 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nrz9\" (UniqueName: \"kubernetes.io/projected/07022ee5-533c-4ad9-a85a-d80ce0376160-kube-api-access-2nrz9\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181569 kubelet[2216]: I0702 07:54:32.181435 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/38497315-38c5-4c94-9ba8-95c6637d4916-typha-certs\") pod \"calico-typha-cdd78fc86-x5kkn\" (UID: \"38497315-38c5-4c94-9ba8-95c6637d4916\") " pod="calico-system/calico-typha-cdd78fc86-x5kkn" Jul 2 07:54:32.181744 kubelet[2216]: I0702 07:54:32.181451 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-policysync\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181744 kubelet[2216]: I0702 07:54:32.181466 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07022ee5-533c-4ad9-a85a-d80ce0376160-tigera-ca-bundle\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181744 kubelet[2216]: I0702 07:54:32.181480 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-var-run-calico\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181744 kubelet[2216]: I0702 07:54:32.181505 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-cni-log-dir\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181744 kubelet[2216]: I0702 07:54:32.181522 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55cc\" (UniqueName: \"kubernetes.io/projected/38497315-38c5-4c94-9ba8-95c6637d4916-kube-api-access-z55cc\") pod \"calico-typha-cdd78fc86-x5kkn\" (UID: \"38497315-38c5-4c94-9ba8-95c6637d4916\") " pod="calico-system/calico-typha-cdd78fc86-x5kkn" Jul 2 07:54:32.181908 kubelet[2216]: I0702 07:54:32.181538 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-flexvol-driver-host\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181908 kubelet[2216]: I0702 07:54:32.181553 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-var-lib-calico\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.181908 kubelet[2216]: I0702 07:54:32.181569 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/07022ee5-533c-4ad9-a85a-d80ce0376160-cni-net-dir\") pod \"calico-node-tpm99\" (UID: \"07022ee5-533c-4ad9-a85a-d80ce0376160\") " pod="calico-system/calico-node-tpm99" Jul 2 07:54:32.282775 kubelet[2216]: I0702 07:54:32.282733 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/092f7597-7194-4dd8-8fd0-5b1161264bc5-socket-dir\") pod \"csi-node-driver-xjtw8\" (UID: \"092f7597-7194-4dd8-8fd0-5b1161264bc5\") " pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:32.283526 kubelet[2216]: I0702 07:54:32.283492 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kwf5\" (UniqueName: \"kubernetes.io/projected/092f7597-7194-4dd8-8fd0-5b1161264bc5-kube-api-access-9kwf5\") pod \"csi-node-driver-xjtw8\" (UID: \"092f7597-7194-4dd8-8fd0-5b1161264bc5\") " pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:32.283586 kubelet[2216]: I0702 07:54:32.283542 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/092f7597-7194-4dd8-8fd0-5b1161264bc5-varrun\") pod \"csi-node-driver-xjtw8\" (UID: \"092f7597-7194-4dd8-8fd0-5b1161264bc5\") " pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:32.283586 kubelet[2216]: I0702 07:54:32.283560 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/092f7597-7194-4dd8-8fd0-5b1161264bc5-registration-dir\") pod \"csi-node-driver-xjtw8\" (UID: \"092f7597-7194-4dd8-8fd0-5b1161264bc5\") " pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:32.283654 kubelet[2216]: I0702 07:54:32.283608 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/092f7597-7194-4dd8-8fd0-5b1161264bc5-kubelet-dir\") pod \"csi-node-driver-xjtw8\" (UID: \"092f7597-7194-4dd8-8fd0-5b1161264bc5\") " pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:32.292684 kubelet[2216]: E0702 07:54:32.292663 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.292820 kubelet[2216]: W0702 07:54:32.292790 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.292968 kubelet[2216]: E0702 07:54:32.292843 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.293065 kubelet[2216]: E0702 07:54:32.293000 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.293065 kubelet[2216]: W0702 07:54:32.293011 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.293065 kubelet[2216]: E0702 07:54:32.293023 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.294509 kubelet[2216]: E0702 07:54:32.294485 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.294509 kubelet[2216]: W0702 07:54:32.294505 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.294586 kubelet[2216]: E0702 07:54:32.294525 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.301846 kubelet[2216]: E0702 07:54:32.298444 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.301846 kubelet[2216]: W0702 07:54:32.298456 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.301846 kubelet[2216]: E0702 07:54:32.298470 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.302689 kubelet[2216]: E0702 07:54:32.302667 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.302689 kubelet[2216]: W0702 07:54:32.302681 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.302689 kubelet[2216]: E0702 07:54:32.302700 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.309796 kubelet[2216]: E0702 07:54:32.309765 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:32.310375 env[1302]: time="2024-07-02T07:54:32.310336216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdd78fc86-x5kkn,Uid:38497315-38c5-4c94-9ba8-95c6637d4916,Namespace:calico-system,Attempt:0,}" Jul 2 07:54:32.331329 env[1302]: time="2024-07-02T07:54:32.331267882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:32.331536 env[1302]: time="2024-07-02T07:54:32.331340283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:32.331536 env[1302]: time="2024-07-02T07:54:32.331362913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:32.331536 env[1302]: time="2024-07-02T07:54:32.331485757Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7bc486c69edfb29c5d17de86f3f94727335331c214c16a850a13c2ba15dbdd91 pid=2628 runtime=io.containerd.runc.v2 Jul 2 07:54:32.349400 kubelet[2216]: E0702 07:54:32.348487 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:32.349564 env[1302]: time="2024-07-02T07:54:32.348947703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpm99,Uid:07022ee5-533c-4ad9-a85a-d80ce0376160,Namespace:calico-system,Attempt:0,}" Jul 2 07:54:32.371128 env[1302]: time="2024-07-02T07:54:32.367732977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:32.371128 env[1302]: time="2024-07-02T07:54:32.367764697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:32.371128 env[1302]: time="2024-07-02T07:54:32.367773829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:32.371128 env[1302]: time="2024-07-02T07:54:32.367897263Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d pid=2659 runtime=io.containerd.runc.v2 Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385170 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386175 kubelet[2216]: W0702 07:54:32.385188 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385212 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385362 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386175 kubelet[2216]: W0702 07:54:32.385369 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385380 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385495 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386175 kubelet[2216]: W0702 07:54:32.385501 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385509 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386175 kubelet[2216]: E0702 07:54:32.385653 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386631 kubelet[2216]: W0702 07:54:32.385661 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.385671 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.385816 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386631 kubelet[2216]: W0702 07:54:32.385822 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.385831 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.385970 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386631 kubelet[2216]: W0702 07:54:32.385976 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.385995 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386631 kubelet[2216]: E0702 07:54:32.386097 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386631 kubelet[2216]: W0702 07:54:32.386102 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386878 kubelet[2216]: E0702 07:54:32.386111 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.386878 kubelet[2216]: E0702 07:54:32.386229 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.386878 kubelet[2216]: W0702 07:54:32.386243 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.386878 kubelet[2216]: E0702 07:54:32.386252 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.388825 env[1302]: time="2024-07-02T07:54:32.387693440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cdd78fc86-x5kkn,Uid:38497315-38c5-4c94-9ba8-95c6637d4916,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bc486c69edfb29c5d17de86f3f94727335331c214c16a850a13c2ba15dbdd91\"" Jul 2 07:54:32.389082 kubelet[2216]: E0702 07:54:32.388990 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.389082 kubelet[2216]: W0702 07:54:32.388999 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.389082 kubelet[2216]: E0702 07:54:32.389015 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.389285 kubelet[2216]: E0702 07:54:32.389211 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.389285 kubelet[2216]: W0702 07:54:32.389220 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.389285 kubelet[2216]: E0702 07:54:32.389231 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.389488 kubelet[2216]: E0702 07:54:32.389415 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.389488 kubelet[2216]: W0702 07:54:32.389424 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.389488 kubelet[2216]: E0702 07:54:32.389434 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.389756 kubelet[2216]: E0702 07:54:32.389684 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.389756 kubelet[2216]: W0702 07:54:32.389695 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.389756 kubelet[2216]: E0702 07:54:32.389724 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.390019 kubelet[2216]: E0702 07:54:32.389919 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390019 kubelet[2216]: W0702 07:54:32.389927 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.390090 kubelet[2216]: E0702 07:54:32.390049 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:32.390210 kubelet[2216]: E0702 07:54:32.390183 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390210 kubelet[2216]: W0702 07:54:32.390205 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.390406 kubelet[2216]: E0702 07:54:32.390389 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390406 kubelet[2216]: W0702 07:54:32.390403 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.390489 kubelet[2216]: E0702 07:54:32.390412 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.390489 kubelet[2216]: E0702 07:54:32.390436 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.390489 kubelet[2216]: E0702 07:54:32.390452 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.390585 kubelet[2216]: E0702 07:54:32.390562 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390585 kubelet[2216]: W0702 07:54:32.390578 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.390585 kubelet[2216]: E0702 07:54:32.390617 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.390793 kubelet[2216]: E0702 07:54:32.390775 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390793 kubelet[2216]: W0702 07:54:32.390791 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.390878 kubelet[2216]: E0702 07:54:32.390807 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.390991 kubelet[2216]: E0702 07:54:32.390966 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.390991 kubelet[2216]: W0702 07:54:32.390980 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.391064 kubelet[2216]: E0702 07:54:32.391002 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.391148 kubelet[2216]: E0702 07:54:32.391131 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.391148 kubelet[2216]: W0702 07:54:32.391143 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.391227 kubelet[2216]: E0702 07:54:32.391153 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.391527 kubelet[2216]: E0702 07:54:32.391511 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.391527 kubelet[2216]: W0702 07:54:32.391524 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.391527 kubelet[2216]: E0702 07:54:32.391538 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.391718 kubelet[2216]: E0702 07:54:32.391703 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.391718 kubelet[2216]: W0702 07:54:32.391713 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.391718 kubelet[2216]: E0702 07:54:32.391724 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.391867 kubelet[2216]: E0702 07:54:32.391843 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.391867 kubelet[2216]: W0702 07:54:32.391853 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.391867 kubelet[2216]: E0702 07:54:32.391870 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.391979 kubelet[2216]: E0702 07:54:32.391969 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.391979 kubelet[2216]: W0702 07:54:32.391975 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.392024 kubelet[2216]: E0702 07:54:32.391983 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.392171 kubelet[2216]: E0702 07:54:32.392155 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.392171 kubelet[2216]: W0702 07:54:32.392167 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.392261 kubelet[2216]: E0702 07:54:32.392180 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.394342 env[1302]: time="2024-07-02T07:54:32.394311485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 07:54:32.395254 kubelet[2216]: E0702 07:54:32.395238 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.395320 kubelet[2216]: W0702 07:54:32.395250 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.395320 kubelet[2216]: E0702 07:54:32.395279 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:32.399295 kubelet[2216]: E0702 07:54:32.398271 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:32.399295 kubelet[2216]: W0702 07:54:32.398283 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:32.399295 kubelet[2216]: E0702 07:54:32.398297 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:32.416546 env[1302]: time="2024-07-02T07:54:32.416437091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tpm99,Uid:07022ee5-533c-4ad9-a85a-d80ce0376160,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\"" Jul 2 07:54:32.416915 kubelet[2216]: E0702 07:54:32.416883 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:32.912000 audit[2727]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:32.914199 kernel: kauditd_printk_skb: 155 callbacks suppressed Jul 2 07:54:32.914307 kernel: audit: type=1325 audit(1719906872.912:280): table=filter:93 family=2 entries=16 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:32.912000 audit[2727]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0c7f1a00 a2=0 a3=7fff0c7f19ec items=0 ppid=2389 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:32.921754 kernel: audit: type=1300 audit(1719906872.912:280): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0c7f1a00 a2=0 a3=7fff0c7f19ec items=0 ppid=2389 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:32.921807 kernel: audit: type=1327 audit(1719906872.912:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:32.912000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:32.912000 audit[2727]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:32.912000 audit[2727]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0c7f1a00 a2=0 a3=0 items=0 ppid=2389 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:32.935132 kernel: audit: type=1325 audit(1719906872.912:281): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:32.935178 kernel: audit: type=1300 audit(1719906872.912:281): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0c7f1a00 a2=0 a3=0 items=0 ppid=2389 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:32.935203 kernel: audit: type=1327 audit(1719906872.912:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:32.912000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:33.395305 kubelet[2216]: E0702 07:54:33.395271 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:34.280326 env[1302]: time="2024-07-02T07:54:34.280284092Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:34.282200 env[1302]: time="2024-07-02T07:54:34.282152795Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:34.283625 env[1302]: time="2024-07-02T07:54:34.283587553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:34.285083 env[1302]: time="2024-07-02T07:54:34.285041222Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:34.285427 env[1302]: time="2024-07-02T07:54:34.285393598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 07:54:34.286694 env[1302]: time="2024-07-02T07:54:34.286671120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 07:54:34.294937 env[1302]: time="2024-07-02T07:54:34.294904403Z" level=info msg="CreateContainer within sandbox \"7bc486c69edfb29c5d17de86f3f94727335331c214c16a850a13c2ba15dbdd91\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 07:54:34.309702 env[1302]: time="2024-07-02T07:54:34.309661626Z" level=info msg="CreateContainer within sandbox \"7bc486c69edfb29c5d17de86f3f94727335331c214c16a850a13c2ba15dbdd91\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b4b791f00d9bca0854aac33f84f0deb471c2d1c0b47f7c3f36f25d778a0b9fac\"" Jul 2 07:54:34.310021 env[1302]: time="2024-07-02T07:54:34.309993106Z" level=info msg="StartContainer for \"b4b791f00d9bca0854aac33f84f0deb471c2d1c0b47f7c3f36f25d778a0b9fac\"" Jul 2 07:54:34.357913 env[1302]: time="2024-07-02T07:54:34.357862514Z" level=info msg="StartContainer for \"b4b791f00d9bca0854aac33f84f0deb471c2d1c0b47f7c3f36f25d778a0b9fac\" returns successfully" Jul 2 07:54:34.444381 kubelet[2216]: E0702 07:54:34.444354 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:34.456087 kubelet[2216]: I0702 07:54:34.456058 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-cdd78fc86-x5kkn" podStartSLOduration=1.563229787 podCreationTimestamp="2024-07-02 07:54:31 +0000 UTC" firstStartedPulling="2024-07-02 07:54:32.393444724 +0000 UTC m=+19.092263400" lastFinishedPulling="2024-07-02 07:54:34.286236554 +0000 UTC 
m=+20.985055230" observedRunningTime="2024-07-02 07:54:34.455863398 +0000 UTC m=+21.154682074" watchObservedRunningTime="2024-07-02 07:54:34.456021617 +0000 UTC m=+21.154840283" Jul 2 07:54:34.496140 kubelet[2216]: E0702 07:54:34.496102 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.496203 kubelet[2216]: W0702 07:54:34.496137 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.496203 kubelet[2216]: E0702 07:54:34.496166 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.496394 kubelet[2216]: E0702 07:54:34.496371 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.496394 kubelet[2216]: W0702 07:54:34.496384 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.496394 kubelet[2216]: E0702 07:54:34.496396 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.496644 kubelet[2216]: E0702 07:54:34.496617 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.496644 kubelet[2216]: W0702 07:54:34.496633 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.496644 kubelet[2216]: E0702 07:54:34.496654 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.496894 kubelet[2216]: E0702 07:54:34.496879 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.496894 kubelet[2216]: W0702 07:54:34.496886 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.496894 kubelet[2216]: E0702 07:54:34.496895 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.497053 kubelet[2216]: E0702 07:54:34.497039 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497053 kubelet[2216]: W0702 07:54:34.497048 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.497129 kubelet[2216]: E0702 07:54:34.497063 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.497208 kubelet[2216]: E0702 07:54:34.497195 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497208 kubelet[2216]: W0702 07:54:34.497204 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.497275 kubelet[2216]: E0702 07:54:34.497212 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.497344 kubelet[2216]: E0702 07:54:34.497333 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497344 kubelet[2216]: W0702 07:54:34.497341 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.497412 kubelet[2216]: E0702 07:54:34.497350 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.497542 kubelet[2216]: E0702 07:54:34.497509 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497542 kubelet[2216]: W0702 07:54:34.497538 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.497542 kubelet[2216]: E0702 07:54:34.497562 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.497796 kubelet[2216]: E0702 07:54:34.497773 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497796 kubelet[2216]: W0702 07:54:34.497783 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.497796 kubelet[2216]: E0702 07:54:34.497792 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.497997 kubelet[2216]: E0702 07:54:34.497978 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.497997 kubelet[2216]: W0702 07:54:34.497983 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.498046 kubelet[2216]: E0702 07:54:34.498001 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.498152 kubelet[2216]: E0702 07:54:34.498135 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.498152 kubelet[2216]: W0702 07:54:34.498145 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.498152 kubelet[2216]: E0702 07:54:34.498154 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.498325 kubelet[2216]: E0702 07:54:34.498308 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.498325 kubelet[2216]: W0702 07:54:34.498319 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.498421 kubelet[2216]: E0702 07:54:34.498332 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.498501 kubelet[2216]: E0702 07:54:34.498487 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.498501 kubelet[2216]: W0702 07:54:34.498496 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.498578 kubelet[2216]: E0702 07:54:34.498505 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.498777 kubelet[2216]: E0702 07:54:34.498756 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.498777 kubelet[2216]: W0702 07:54:34.498766 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.498777 kubelet[2216]: E0702 07:54:34.498776 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.498974 kubelet[2216]: E0702 07:54:34.498956 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.498974 kubelet[2216]: W0702 07:54:34.498967 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.499047 kubelet[2216]: E0702 07:54:34.498980 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.499305 kubelet[2216]: E0702 07:54:34.499288 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.499305 kubelet[2216]: W0702 07:54:34.499302 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.499378 kubelet[2216]: E0702 07:54:34.499319 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.499543 kubelet[2216]: E0702 07:54:34.499527 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.499543 kubelet[2216]: W0702 07:54:34.499537 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.499636 kubelet[2216]: E0702 07:54:34.499552 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.499777 kubelet[2216]: E0702 07:54:34.499762 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.499777 kubelet[2216]: W0702 07:54:34.499775 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.499851 kubelet[2216]: E0702 07:54:34.499794 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.500025 kubelet[2216]: E0702 07:54:34.500010 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.500025 kubelet[2216]: W0702 07:54:34.500024 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.500101 kubelet[2216]: E0702 07:54:34.500046 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.500219 kubelet[2216]: E0702 07:54:34.500207 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.500243 kubelet[2216]: W0702 07:54:34.500217 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.500243 kubelet[2216]: E0702 07:54:34.500234 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.500422 kubelet[2216]: E0702 07:54:34.500406 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.500422 kubelet[2216]: W0702 07:54:34.500418 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.500494 kubelet[2216]: E0702 07:54:34.500438 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.500668 kubelet[2216]: E0702 07:54:34.500653 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.500668 kubelet[2216]: W0702 07:54:34.500665 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.500742 kubelet[2216]: E0702 07:54:34.500686 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.500873 kubelet[2216]: E0702 07:54:34.500859 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.500873 kubelet[2216]: W0702 07:54:34.500871 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.500926 kubelet[2216]: E0702 07:54:34.500891 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.502476 kubelet[2216]: E0702 07:54:34.502452 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.502476 kubelet[2216]: W0702 07:54:34.502466 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.502561 kubelet[2216]: E0702 07:54:34.502486 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.502705 kubelet[2216]: E0702 07:54:34.502689 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.502705 kubelet[2216]: W0702 07:54:34.502699 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.502762 kubelet[2216]: E0702 07:54:34.502715 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.502885 kubelet[2216]: E0702 07:54:34.502865 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.502885 kubelet[2216]: W0702 07:54:34.502879 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.502935 kubelet[2216]: E0702 07:54:34.502899 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.503074 kubelet[2216]: E0702 07:54:34.503062 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.503074 kubelet[2216]: W0702 07:54:34.503072 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.503155 kubelet[2216]: E0702 07:54:34.503086 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.503261 kubelet[2216]: E0702 07:54:34.503248 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.503261 kubelet[2216]: W0702 07:54:34.503256 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.503331 kubelet[2216]: E0702 07:54:34.503289 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.503435 kubelet[2216]: E0702 07:54:34.503423 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.503435 kubelet[2216]: W0702 07:54:34.503431 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.503503 kubelet[2216]: E0702 07:54:34.503441 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.503610 kubelet[2216]: E0702 07:54:34.503583 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.503871 kubelet[2216]: W0702 07:54:34.503854 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.503898 kubelet[2216]: E0702 07:54:34.503874 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:34.504031 kubelet[2216]: E0702 07:54:34.504014 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.504031 kubelet[2216]: W0702 07:54:34.504022 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.504031 kubelet[2216]: E0702 07:54:34.504034 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.504329 kubelet[2216]: E0702 07:54:34.504310 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.504329 kubelet[2216]: W0702 07:54:34.504326 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.504397 kubelet[2216]: E0702 07:54:34.504347 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:34.504534 kubelet[2216]: E0702 07:54:34.504511 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:34.504534 kubelet[2216]: W0702 07:54:34.504532 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:34.504625 kubelet[2216]: E0702 07:54:34.504545 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.394375 kubelet[2216]: E0702 07:54:35.394322 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:35.444364 kubelet[2216]: I0702 07:54:35.444321 2216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:54:35.444913 kubelet[2216]: E0702 07:54:35.444880 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:35.487483 env[1302]: time="2024-07-02T07:54:35.487435191Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:35.489641 env[1302]: time="2024-07-02T07:54:35.489604200Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:35.491244 env[1302]: time="2024-07-02T07:54:35.491201887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:35.492795 env[1302]: time="2024-07-02T07:54:35.492767274Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:35.493199 env[1302]: time="2024-07-02T07:54:35.493167790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 07:54:35.494520 env[1302]: time="2024-07-02T07:54:35.494481696Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 07:54:35.505028 kubelet[2216]: E0702 07:54:35.505006 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505028 kubelet[2216]: W0702 07:54:35.505024 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505176 kubelet[2216]: E0702 07:54:35.505044 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.505245 kubelet[2216]: E0702 07:54:35.505218 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505245 kubelet[2216]: W0702 07:54:35.505227 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505245 kubelet[2216]: E0702 07:54:35.505237 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.505370 kubelet[2216]: E0702 07:54:35.505340 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505370 kubelet[2216]: W0702 07:54:35.505345 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505370 kubelet[2216]: E0702 07:54:35.505353 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.505460 kubelet[2216]: E0702 07:54:35.505450 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505460 kubelet[2216]: W0702 07:54:35.505457 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505543 kubelet[2216]: E0702 07:54:35.505466 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.505632 kubelet[2216]: E0702 07:54:35.505583 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505632 kubelet[2216]: W0702 07:54:35.505609 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505632 kubelet[2216]: E0702 07:54:35.505618 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.505833 kubelet[2216]: E0702 07:54:35.505805 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.505833 kubelet[2216]: W0702 07:54:35.505822 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.505833 kubelet[2216]: E0702 07:54:35.505833 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.505985 kubelet[2216]: E0702 07:54:35.505973 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.506017 kubelet[2216]: W0702 07:54:35.505991 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.506017 kubelet[2216]: E0702 07:54:35.506001 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.506169 kubelet[2216]: E0702 07:54:35.506140 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.506169 kubelet[2216]: W0702 07:54:35.506163 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.506169 kubelet[2216]: E0702 07:54:35.506172 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.506735 kubelet[2216]: E0702 07:54:35.506722 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.506735 kubelet[2216]: W0702 07:54:35.506731 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.506823 kubelet[2216]: E0702 07:54:35.506741 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.506925 kubelet[2216]: E0702 07:54:35.506909 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.506955 kubelet[2216]: W0702 07:54:35.506925 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.506955 kubelet[2216]: E0702 07:54:35.506947 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.507131 kubelet[2216]: E0702 07:54:35.507112 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.507131 kubelet[2216]: W0702 07:54:35.507122 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.507131 kubelet[2216]: E0702 07:54:35.507131 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.507268 kubelet[2216]: E0702 07:54:35.507255 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.507268 kubelet[2216]: W0702 07:54:35.507263 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.507268 kubelet[2216]: E0702 07:54:35.507272 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.507485 kubelet[2216]: E0702 07:54:35.507470 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.507485 kubelet[2216]: W0702 07:54:35.507481 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.507557 kubelet[2216]: E0702 07:54:35.507494 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.507714 kubelet[2216]: E0702 07:54:35.507702 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.507714 kubelet[2216]: W0702 07:54:35.507711 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.507769 kubelet[2216]: E0702 07:54:35.507722 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.507881 kubelet[2216]: E0702 07:54:35.507870 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.507881 kubelet[2216]: W0702 07:54:35.507879 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.507931 kubelet[2216]: E0702 07:54:35.507888 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.508120 kubelet[2216]: E0702 07:54:35.508106 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.508120 kubelet[2216]: W0702 07:54:35.508115 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.508120 kubelet[2216]: E0702 07:54:35.508124 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.508311 kubelet[2216]: E0702 07:54:35.508298 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.508311 kubelet[2216]: W0702 07:54:35.508307 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.508364 kubelet[2216]: E0702 07:54:35.508319 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.508500 kubelet[2216]: E0702 07:54:35.508489 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.508500 kubelet[2216]: W0702 07:54:35.508500 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.508549 kubelet[2216]: E0702 07:54:35.508511 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.508697 kubelet[2216]: E0702 07:54:35.508683 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.508697 kubelet[2216]: W0702 07:54:35.508692 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.508771 kubelet[2216]: E0702 07:54:35.508705 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.508885 kubelet[2216]: E0702 07:54:35.508875 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.508885 kubelet[2216]: W0702 07:54:35.508884 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.508934 kubelet[2216]: E0702 07:54:35.508897 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.509062 kubelet[2216]: E0702 07:54:35.509051 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.509062 kubelet[2216]: W0702 07:54:35.509060 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.509111 kubelet[2216]: E0702 07:54:35.509071 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.509267 kubelet[2216]: E0702 07:54:35.509252 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.509267 kubelet[2216]: W0702 07:54:35.509260 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.509340 env[1302]: time="2024-07-02T07:54:35.509239711Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5\"" Jul 2 07:54:35.509375 kubelet[2216]: E0702 07:54:35.509274 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.509570 kubelet[2216]: E0702 07:54:35.509556 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.509570 kubelet[2216]: W0702 07:54:35.509565 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.509652 kubelet[2216]: E0702 07:54:35.509607 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.509721 kubelet[2216]: E0702 07:54:35.509707 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.509721 kubelet[2216]: W0702 07:54:35.509718 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.509777 env[1302]: time="2024-07-02T07:54:35.509728411Z" level=info msg="StartContainer for \"b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5\"" Jul 2 07:54:35.509805 kubelet[2216]: E0702 07:54:35.509774 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.509871 kubelet[2216]: E0702 07:54:35.509860 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.509871 kubelet[2216]: W0702 07:54:35.509870 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.509915 kubelet[2216]: E0702 07:54:35.509879 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.510084 kubelet[2216]: E0702 07:54:35.510074 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510084 kubelet[2216]: W0702 07:54:35.510083 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510135 kubelet[2216]: E0702 07:54:35.510096 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.510255 kubelet[2216]: E0702 07:54:35.510243 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510255 kubelet[2216]: W0702 07:54:35.510253 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510299 kubelet[2216]: E0702 07:54:35.510263 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.510401 kubelet[2216]: E0702 07:54:35.510391 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510401 kubelet[2216]: W0702 07:54:35.510400 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510448 kubelet[2216]: E0702 07:54:35.510408 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.510699 kubelet[2216]: E0702 07:54:35.510684 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510699 kubelet[2216]: W0702 07:54:35.510695 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510775 kubelet[2216]: E0702 07:54:35.510709 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.510894 kubelet[2216]: E0702 07:54:35.510846 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510894 kubelet[2216]: W0702 07:54:35.510855 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510894 kubelet[2216]: E0702 07:54:35.510863 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.510993 kubelet[2216]: E0702 07:54:35.510958 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.510993 kubelet[2216]: W0702 07:54:35.510963 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.510993 kubelet[2216]: E0702 07:54:35.510971 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.511095 kubelet[2216]: E0702 07:54:35.511083 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.511095 kubelet[2216]: W0702 07:54:35.511092 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.511147 kubelet[2216]: E0702 07:54:35.511101 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:54:35.511484 kubelet[2216]: E0702 07:54:35.511465 2216 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:54:35.511484 kubelet[2216]: W0702 07:54:35.511474 2216 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:54:35.511484 kubelet[2216]: E0702 07:54:35.511485 2216 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:54:35.557155 env[1302]: time="2024-07-02T07:54:35.557114958Z" level=info msg="StartContainer for \"b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5\" returns successfully" Jul 2 07:54:35.882568 env[1302]: time="2024-07-02T07:54:35.882515536Z" level=info msg="shim disconnected" id=b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5 Jul 2 07:54:35.882568 env[1302]: time="2024-07-02T07:54:35.882569765Z" level=warning msg="cleaning up after shim disconnected" id=b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5 namespace=k8s.io Jul 2 07:54:35.882796 env[1302]: time="2024-07-02T07:54:35.882581360Z" level=info msg="cleaning up dead shim" Jul 2 07:54:35.888233 env[1302]: time="2024-07-02T07:54:35.888207105Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:54:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2892 runtime=io.containerd.runc.v2\n" Jul 2 07:54:35.988000 audit[2910]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:35.988000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd0dff1a40 a2=0 a3=7ffd0dff1a2c items=0 ppid=2389 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:35.996878 kernel: audit: type=1325 audit(1719906875.988:282): table=filter:95 family=2 entries=15 op=nft_register_rule pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:35.997016 kernel: audit: type=1300 audit(1719906875.988:282): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd0dff1a40 a2=0 a3=7ffd0dff1a2c items=0 ppid=2389 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:35.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:35.999777 kernel: audit: type=1327 audit(1719906875.988:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:35.989000 audit[2910]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:36.005675 kernel: audit: type=1325 audit(1719906875.989:283): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2910 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:35.989000 audit[2910]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd0dff1a40 a2=0 a3=7ffd0dff1a2c items=0 ppid=2389 pid=2910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:35.989000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:36.290454 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b58517a9cbff0250473554d6db9e1acbe4f266b2a554743d6478a0b7f5c7d1e5-rootfs.mount: Deactivated successfully. 
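The repeated driver-call.go / plugins.go messages above come from the kubelet's FlexVolume prober: each filesystem event under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ makes it re-run `nodeagent~uds/uds init`, and because that binary is not on the node yet the call returns empty output, so the JSON unmarshal fails with "unexpected end of JSON input". For orientation only, a minimal, hypothetical stub (not part of this image, and not how this node is actually fixed) that would satisfy the FlexVolume init handshake could look like this in Go:

```go
// Hypothetical stand-in for the missing
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds binary.
// The kubelet's prober invokes it with a single "init" argument and expects a
// JSON status object on stdout; empty output is what produces the
// "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // init may advertise e.g. {"attach": false}
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// A non-empty JSON status here is what the prober is waiting for.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Everything other than init is out of scope for this sketch.
	reply(driverStatus{Status: "Not supported", Message: "stub driver: call not implemented"})
	os.Exit(1)
}
```

In this boot it is Calico's pod2daemon-flexvol init container (the flexvol-driver container created in sandbox ad80d2c0c8fd… above) that is expected to install the real uds binary into the nodeagent~uds directory, so the probe errors are transient rather than a configuration mistake. Separately, the audit PROCTITLE payloads recorded around the same time are NUL-separated hex; the value logged here decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, matching the NETFILTER_CFG rule-restore events.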
Jul 2 07:54:36.446538 kubelet[2216]: E0702 07:54:36.446516 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:36.446850 kubelet[2216]: E0702 07:54:36.446637 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:36.447480 env[1302]: time="2024-07-02T07:54:36.447443142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 07:54:37.394047 kubelet[2216]: E0702 07:54:37.394019 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:37.448653 kubelet[2216]: E0702 07:54:37.448631 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:39.394273 kubelet[2216]: E0702 07:54:39.394240 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:40.788850 env[1302]: time="2024-07-02T07:54:40.788802430Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:40.790677 env[1302]: time="2024-07-02T07:54:40.790640320Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:40.792324 env[1302]: time="2024-07-02T07:54:40.792294478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:40.793786 env[1302]: time="2024-07-02T07:54:40.793749649Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:40.794156 env[1302]: time="2024-07-02T07:54:40.794127097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 07:54:40.795458 env[1302]: time="2024-07-02T07:54:40.795424271Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 07:54:40.809856 env[1302]: time="2024-07-02T07:54:40.809820817Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec\"" Jul 2 07:54:40.811408 env[1302]: 
time="2024-07-02T07:54:40.810341060Z" level=info msg="StartContainer for \"79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec\"" Jul 2 07:54:40.852943 env[1302]: time="2024-07-02T07:54:40.852894151Z" level=info msg="StartContainer for \"79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec\" returns successfully" Jul 2 07:54:41.393897 kubelet[2216]: E0702 07:54:41.393853 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:41.455721 kubelet[2216]: E0702 07:54:41.455692 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:41.952627 env[1302]: time="2024-07-02T07:54:41.952553253Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:54:41.967806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec-rootfs.mount: Deactivated successfully. Jul 2 07:54:42.036762 kubelet[2216]: I0702 07:54:42.036726 2216 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:54:42.094959 env[1302]: time="2024-07-02T07:54:42.094909310Z" level=info msg="shim disconnected" id=79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec Jul 2 07:54:42.094959 env[1302]: time="2024-07-02T07:54:42.094956169Z" level=warning msg="cleaning up after shim disconnected" id=79c7ee926c59c773231a8396df4c7379be4a39a0be101b81a36e1453a3ff23ec namespace=k8s.io Jul 2 07:54:42.094959 env[1302]: time="2024-07-02T07:54:42.094964306Z" level=info msg="cleaning up dead shim" Jul 2 07:54:42.102330 env[1302]: time="2024-07-02T07:54:42.102288381Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:54:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2964 runtime=io.containerd.runc.v2\n" Jul 2 07:54:42.106391 kubelet[2216]: I0702 07:54:42.106341 2216 topology_manager.go:215] "Topology Admit Handler" podUID="27cc9440-016f-4904-acdc-365f806c13c4" podNamespace="kube-system" podName="coredns-5dd5756b68-4t8xt" Jul 2 07:54:42.111421 kubelet[2216]: I0702 07:54:42.111098 2216 topology_manager.go:215] "Topology Admit Handler" podUID="be490d13-b47c-4e6c-9d39-8c2a55153f51" podNamespace="calico-system" podName="calico-kube-controllers-6d46f8b8c-b7tbb" Jul 2 07:54:42.111421 kubelet[2216]: I0702 07:54:42.111232 2216 topology_manager.go:215] "Topology Admit Handler" podUID="aca79512-22d0-4402-8f15-275b2ea8d5f5" podNamespace="kube-system" podName="coredns-5dd5756b68-tddcs" Jul 2 07:54:42.248554 kubelet[2216]: I0702 07:54:42.248123 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xm7c\" (UniqueName: \"kubernetes.io/projected/be490d13-b47c-4e6c-9d39-8c2a55153f51-kube-api-access-7xm7c\") pod \"calico-kube-controllers-6d46f8b8c-b7tbb\" (UID: \"be490d13-b47c-4e6c-9d39-8c2a55153f51\") " pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" Jul 2 07:54:42.248554 kubelet[2216]: I0702 07:54:42.248160 2216 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aca79512-22d0-4402-8f15-275b2ea8d5f5-config-volume\") pod \"coredns-5dd5756b68-tddcs\" (UID: \"aca79512-22d0-4402-8f15-275b2ea8d5f5\") " pod="kube-system/coredns-5dd5756b68-tddcs" Jul 2 07:54:42.248554 kubelet[2216]: I0702 07:54:42.248178 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be490d13-b47c-4e6c-9d39-8c2a55153f51-tigera-ca-bundle\") pod \"calico-kube-controllers-6d46f8b8c-b7tbb\" (UID: \"be490d13-b47c-4e6c-9d39-8c2a55153f51\") " pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" Jul 2 07:54:42.248554 kubelet[2216]: I0702 07:54:42.248201 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27cc9440-016f-4904-acdc-365f806c13c4-config-volume\") pod \"coredns-5dd5756b68-4t8xt\" (UID: \"27cc9440-016f-4904-acdc-365f806c13c4\") " pod="kube-system/coredns-5dd5756b68-4t8xt" Jul 2 07:54:42.248554 kubelet[2216]: I0702 07:54:42.248218 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnxx7\" (UniqueName: \"kubernetes.io/projected/27cc9440-016f-4904-acdc-365f806c13c4-kube-api-access-fnxx7\") pod \"coredns-5dd5756b68-4t8xt\" (UID: \"27cc9440-016f-4904-acdc-365f806c13c4\") " pod="kube-system/coredns-5dd5756b68-4t8xt" Jul 2 07:54:42.248816 kubelet[2216]: I0702 07:54:42.248236 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s86vl\" (UniqueName: \"kubernetes.io/projected/aca79512-22d0-4402-8f15-275b2ea8d5f5-kube-api-access-s86vl\") pod \"coredns-5dd5756b68-tddcs\" (UID: \"aca79512-22d0-4402-8f15-275b2ea8d5f5\") " pod="kube-system/coredns-5dd5756b68-tddcs" Jul 2 07:54:42.412480 kubelet[2216]: E0702 07:54:42.412439 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:42.412954 env[1302]: time="2024-07-02T07:54:42.412907881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4t8xt,Uid:27cc9440-016f-4904-acdc-365f806c13c4,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:42.417232 kubelet[2216]: E0702 07:54:42.417216 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:42.417618 env[1302]: time="2024-07-02T07:54:42.417585209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tddcs,Uid:aca79512-22d0-4402-8f15-275b2ea8d5f5,Namespace:kube-system,Attempt:0,}" Jul 2 07:54:42.417744 env[1302]: time="2024-07-02T07:54:42.417698479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d46f8b8c-b7tbb,Uid:be490d13-b47c-4e6c-9d39-8c2a55153f51,Namespace:calico-system,Attempt:0,}" Jul 2 07:54:42.459369 kubelet[2216]: E0702 07:54:42.459340 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:42.460027 env[1302]: time="2024-07-02T07:54:42.459965439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 
07:54:42.698368 env[1302]: time="2024-07-02T07:54:42.698295898Z" level=error msg="Failed to destroy network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.698688 env[1302]: time="2024-07-02T07:54:42.698659500Z" level=error msg="encountered an error cleaning up failed sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.698724 env[1302]: time="2024-07-02T07:54:42.698704846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d46f8b8c-b7tbb,Uid:be490d13-b47c-4e6c-9d39-8c2a55153f51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.698925 kubelet[2216]: E0702 07:54:42.698897 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.698986 kubelet[2216]: E0702 07:54:42.698962 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" Jul 2 07:54:42.698986 kubelet[2216]: E0702 07:54:42.698980 2216 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" Jul 2 07:54:42.699040 kubelet[2216]: E0702 07:54:42.699027 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d46f8b8c-b7tbb_calico-system(be490d13-b47c-4e6c-9d39-8c2a55153f51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d46f8b8c-b7tbb_calico-system(be490d13-b47c-4e6c-9d39-8c2a55153f51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" podUID="be490d13-b47c-4e6c-9d39-8c2a55153f51" Jul 2 07:54:42.699694 env[1302]: time="2024-07-02T07:54:42.699634138Z" level=error msg="Failed to destroy network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.699967 env[1302]: time="2024-07-02T07:54:42.699939837Z" level=error msg="encountered an error cleaning up failed sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.700029 env[1302]: time="2024-07-02T07:54:42.699988791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4t8xt,Uid:27cc9440-016f-4904-acdc-365f806c13c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.700181 kubelet[2216]: E0702 07:54:42.700165 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.700235 kubelet[2216]: E0702 07:54:42.700198 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4t8xt" Jul 2 07:54:42.700235 kubelet[2216]: E0702 07:54:42.700218 2216 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4t8xt" Jul 2 07:54:42.700284 kubelet[2216]: E0702 07:54:42.700249 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-4t8xt_kube-system(27cc9440-016f-4904-acdc-365f806c13c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-4t8xt_kube-system(27cc9440-016f-4904-acdc-365f806c13c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4t8xt" podUID="27cc9440-016f-4904-acdc-365f806c13c4" Jul 2 07:54:42.707833 env[1302]: time="2024-07-02T07:54:42.707784517Z" level=error msg="Failed to destroy network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.708097 env[1302]: time="2024-07-02T07:54:42.708068700Z" level=error msg="encountered an error cleaning up failed sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.708131 env[1302]: time="2024-07-02T07:54:42.708108435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tddcs,Uid:aca79512-22d0-4402-8f15-275b2ea8d5f5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.708318 kubelet[2216]: E0702 07:54:42.708301 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:42.708352 kubelet[2216]: E0702 07:54:42.708336 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-tddcs" Jul 2 07:54:42.708381 kubelet[2216]: E0702 07:54:42.708354 2216 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-tddcs" Jul 2 07:54:42.708407 kubelet[2216]: E0702 07:54:42.708394 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-tddcs_kube-system(aca79512-22d0-4402-8f15-275b2ea8d5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-tddcs_kube-system(aca79512-22d0-4402-8f15-275b2ea8d5f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-tddcs" podUID="aca79512-22d0-4402-8f15-275b2ea8d5f5" Jul 2 07:54:42.968158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26-shm.mount: Deactivated successfully. Jul 2 07:54:42.968274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257-shm.mount: Deactivated successfully. Jul 2 07:54:42.968367 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632-shm.mount: Deactivated successfully. Jul 2 07:54:43.397374 env[1302]: time="2024-07-02T07:54:43.397212051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjtw8,Uid:092f7597-7194-4dd8-8fd0-5b1161264bc5,Namespace:calico-system,Attempt:0,}" Jul 2 07:54:43.446730 env[1302]: time="2024-07-02T07:54:43.446665765Z" level=error msg="Failed to destroy network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.447044 env[1302]: time="2024-07-02T07:54:43.447009131Z" level=error msg="encountered an error cleaning up failed sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.447086 env[1302]: time="2024-07-02T07:54:43.447056041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjtw8,Uid:092f7597-7194-4dd8-8fd0-5b1161264bc5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.447335 kubelet[2216]: E0702 07:54:43.447288 2216 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.447335 kubelet[2216]: E0702 07:54:43.447343 2216 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:43.447696 kubelet[2216]: E0702 07:54:43.447362 2216 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xjtw8" Jul 2 07:54:43.447696 kubelet[2216]: E0702 07:54:43.447416 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xjtw8_calico-system(092f7597-7194-4dd8-8fd0-5b1161264bc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xjtw8_calico-system(092f7597-7194-4dd8-8fd0-5b1161264bc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:43.448834 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03-shm.mount: Deactivated successfully. Jul 2 07:54:43.461657 kubelet[2216]: I0702 07:54:43.461622 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:43.462173 env[1302]: time="2024-07-02T07:54:43.462148534Z" level=info msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" Jul 2 07:54:43.462459 kubelet[2216]: I0702 07:54:43.462438 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:43.462891 env[1302]: time="2024-07-02T07:54:43.462850449Z" level=info msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" Jul 2 07:54:43.464204 kubelet[2216]: I0702 07:54:43.464175 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:43.464633 env[1302]: time="2024-07-02T07:54:43.464587313Z" level=info msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" Jul 2 07:54:43.465928 kubelet[2216]: I0702 07:54:43.465904 2216 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:43.466273 env[1302]: time="2024-07-02T07:54:43.466243827Z" level=info msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" Jul 2 07:54:43.493500 env[1302]: time="2024-07-02T07:54:43.493447949Z" level=error msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" failed" error="failed to destroy network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.496982 kubelet[2216]: E0702 07:54:43.496830 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:43.496982 kubelet[2216]: E0702 07:54:43.496902 2216 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26"} Jul 2 07:54:43.496982 kubelet[2216]: E0702 07:54:43.496936 2216 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be490d13-b47c-4e6c-9d39-8c2a55153f51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:54:43.496982 kubelet[2216]: E0702 07:54:43.496964 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be490d13-b47c-4e6c-9d39-8c2a55153f51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" podUID="be490d13-b47c-4e6c-9d39-8c2a55153f51" Jul 2 07:54:43.503303 env[1302]: time="2024-07-02T07:54:43.503267679Z" level=error msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" failed" error="failed to destroy network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.503653 kubelet[2216]: E0702 07:54:43.503628 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:43.503724 kubelet[2216]: E0702 07:54:43.503675 2216 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257"} Jul 2 07:54:43.503724 kubelet[2216]: E0702 07:54:43.503708 2216 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"27cc9440-016f-4904-acdc-365f806c13c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:54:43.503810 kubelet[2216]: E0702 07:54:43.503733 2216 pod_workers.go:1300] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"27cc9440-016f-4904-acdc-365f806c13c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4t8xt" podUID="27cc9440-016f-4904-acdc-365f806c13c4" Jul 2 07:54:43.504358 env[1302]: time="2024-07-02T07:54:43.504305004Z" level=error msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" failed" error="failed to destroy network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.504490 kubelet[2216]: E0702 07:54:43.504462 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:43.504490 kubelet[2216]: E0702 07:54:43.504486 2216 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632"} Jul 2 07:54:43.504573 kubelet[2216]: E0702 07:54:43.504514 2216 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aca79512-22d0-4402-8f15-275b2ea8d5f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:54:43.504573 kubelet[2216]: E0702 07:54:43.504553 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aca79512-22d0-4402-8f15-275b2ea8d5f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-tddcs" podUID="aca79512-22d0-4402-8f15-275b2ea8d5f5" Jul 2 07:54:43.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:46564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:43.510902 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:46564.service. 
Jul 2 07:54:43.512128 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 2 07:54:43.512252 kernel: audit: type=1130 audit(1719906883.510:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:46564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:43.517170 env[1302]: time="2024-07-02T07:54:43.517008293Z" level=error msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" failed" error="failed to destroy network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:54:43.517251 kubelet[2216]: E0702 07:54:43.517220 2216 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:43.517296 kubelet[2216]: E0702 07:54:43.517257 2216 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03"} Jul 2 07:54:43.517296 kubelet[2216]: E0702 07:54:43.517289 2216 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"092f7597-7194-4dd8-8fd0-5b1161264bc5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:54:43.517402 kubelet[2216]: E0702 07:54:43.517317 2216 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"092f7597-7194-4dd8-8fd0-5b1161264bc5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xjtw8" podUID="092f7597-7194-4dd8-8fd0-5b1161264bc5" Jul 2 07:54:43.552000 audit[3226]: USER_ACCT pid=3226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.552785 sshd[3226]: Accepted publickey for core from 10.0.0.1 port 46564 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:43.554624 sshd[3226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:43.553000 audit[3226]: CRED_ACQ pid=3226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.558460 systemd-logind[1289]: New session 8 of user core. Jul 2 07:54:43.559230 systemd[1]: Started session-8.scope. Jul 2 07:54:43.561175 kernel: audit: type=1101 audit(1719906883.552:285): pid=3226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.561222 kernel: audit: type=1103 audit(1719906883.553:286): pid=3226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.561241 kernel: audit: type=1006 audit(1719906883.553:287): pid=3226 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 2 07:54:43.553000 audit[3226]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc70a4bf60 a2=3 a3=0 items=0 ppid=1 pid=3226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:43.567900 kernel: audit: type=1300 audit(1719906883.553:287): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc70a4bf60 a2=3 a3=0 items=0 ppid=1 pid=3226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:43.567942 kernel: audit: type=1327 audit(1719906883.553:287): proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:43.553000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:43.569345 kernel: audit: type=1105 audit(1719906883.563:288): pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.563000 audit[3226]: USER_START pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.573617 kernel: audit: type=1103 audit(1719906883.564:289): pid=3230 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.564000 audit[3230]: CRED_ACQ pid=3230 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.683349 sshd[3226]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:43.684000 audit[3226]: USER_END pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.687817 systemd[1]: 
sshd@7-10.0.0.138:22-10.0.0.1:46564.service: Deactivated successfully. Jul 2 07:54:43.688548 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:54:43.696472 systemd-logind[1289]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:54:43.699409 systemd-logind[1289]: Removed session 8. Jul 2 07:54:43.684000 audit[3226]: CRED_DISP pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.722122 kernel: audit: type=1106 audit(1719906883.684:290): pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.722199 kernel: audit: type=1104 audit(1719906883.684:291): pid=3226 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:43.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:46564 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:47.716325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520877896.mount: Deactivated successfully. Jul 2 07:54:47.788838 env[1302]: time="2024-07-02T07:54:47.788796975Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:47.791115 env[1302]: time="2024-07-02T07:54:47.791083195Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:47.792700 env[1302]: time="2024-07-02T07:54:47.792667576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:47.794021 env[1302]: time="2024-07-02T07:54:47.793994691Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:47.794326 env[1302]: time="2024-07-02T07:54:47.794292263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 07:54:47.804313 env[1302]: time="2024-07-02T07:54:47.804283227Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 07:54:47.818665 env[1302]: time="2024-07-02T07:54:47.818623023Z" level=info msg="CreateContainer within sandbox \"ad80d2c0c8fd223d244218f079156d4142e8424adf543f8bd347bfd3d05d1e1d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ffce6fea16fde8c6a83ad7a57d4c66e7f50e3d417ecfa58a7dd6fc227b92f951\"" Jul 2 07:54:47.819044 env[1302]: time="2024-07-02T07:54:47.819015753Z" 
level=info msg="StartContainer for \"ffce6fea16fde8c6a83ad7a57d4c66e7f50e3d417ecfa58a7dd6fc227b92f951\"" Jul 2 07:54:47.862517 env[1302]: time="2024-07-02T07:54:47.862477344Z" level=info msg="StartContainer for \"ffce6fea16fde8c6a83ad7a57d4c66e7f50e3d417ecfa58a7dd6fc227b92f951\" returns successfully" Jul 2 07:54:47.923985 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 07:54:47.924133 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 07:54:48.476390 kubelet[2216]: E0702 07:54:48.476365 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:48.487922 kubelet[2216]: I0702 07:54:48.487881 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tpm99" podStartSLOduration=1.110752455 podCreationTimestamp="2024-07-02 07:54:32 +0000 UTC" firstStartedPulling="2024-07-02 07:54:32.417440417 +0000 UTC m=+19.116259093" lastFinishedPulling="2024-07-02 07:54:47.79453011 +0000 UTC m=+34.493348786" observedRunningTime="2024-07-02 07:54:48.487604392 +0000 UTC m=+35.186423068" watchObservedRunningTime="2024-07-02 07:54:48.487842148 +0000 UTC m=+35.186660824" Jul 2 07:54:48.687377 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:46580.service. Jul 2 07:54:48.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:46580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:48.688343 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:54:48.688412 kernel: audit: type=1130 audit(1719906888.686:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:46580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:48.728000 audit[3331]: USER_ACCT pid=3331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.729675 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 46580 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:48.732118 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:48.731000 audit[3331]: CRED_ACQ pid=3331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.735326 systemd-logind[1289]: New session 9 of user core. Jul 2 07:54:48.736132 systemd[1]: Started session-9.scope. 
Jul 2 07:54:48.737016 kernel: audit: type=1101 audit(1719906888.728:294): pid=3331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.737070 kernel: audit: type=1103 audit(1719906888.731:295): pid=3331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.737092 kernel: audit: type=1006 audit(1719906888.731:296): pid=3331 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 07:54:48.731000 audit[3331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3267c7e0 a2=3 a3=0 items=0 ppid=1 pid=3331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:48.743331 kernel: audit: type=1300 audit(1719906888.731:296): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3267c7e0 a2=3 a3=0 items=0 ppid=1 pid=3331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:48.743382 kernel: audit: type=1327 audit(1719906888.731:296): proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:48.731000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:48.744649 kernel: audit: type=1105 audit(1719906888.739:297): pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.739000 audit[3331]: USER_START pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.748826 kernel: audit: type=1103 audit(1719906888.740:298): pid=3334 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.740000 audit[3334]: CRED_ACQ pid=3334 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.842099 sshd[3331]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:48.842000 audit[3331]: USER_END pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.844125 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:46580.service: Deactivated successfully. Jul 2 07:54:48.844845 systemd[1]: session-9.scope: Deactivated successfully. 
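Editor's note: the audit records above and below carry the process command line as a hex-encoded PROCTITLE field with NUL-separated arguments. The short Go helper below (an illustrative sketch, not part of any tool in the log) decodes them; the sample value is taken from the sshd record above and decodes to "sshd: core [priv]".

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    // Audit PROCTITLE fields are the command line, hex-encoded, with the original
    // NUL separators between arguments. Decode and swap NULs for spaces.
    func decodeProctitle(h string) (string, error) {
    	raw, err := hex.DecodeString(h)
    	if err != nil {
    		return "", err
    	}
    	return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
    	// Sample from the sshd audit record above; decodes to "sshd: core [priv]".
    	s, err := decodeProctitle("737368643A20636F7265205B707269765D")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(s)
    }

Applied to the bpftool records further down, the same decoding yields "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp", which is presumably calico-node probing the kernel's BPF/XDP support as it starts.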
Jul 2 07:54:48.846131 systemd-logind[1289]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:54:48.846988 systemd-logind[1289]: Removed session 9. Jul 2 07:54:48.842000 audit[3331]: CRED_DISP pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.850698 kernel: audit: type=1106 audit(1719906888.842:299): pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.850753 kernel: audit: type=1104 audit(1719906888.842:300): pid=3331 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:48.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:46580 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:49.378000 audit[3391]: AVC avc: denied { write } for pid=3391 comm="tee" name="fd" dev="proc" ino=26680 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.378000 audit[3391]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7122ca2f a2=241 a3=1b6 items=1 ppid=3363 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.378000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 07:54:49.378000 audit: PATH item=0 name="/dev/fd/63" inode=24036 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.383000 audit[3386]: AVC avc: denied { write } for pid=3386 comm="tee" name="fd" dev="proc" ino=24707 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.383000 audit[3386]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc41d44a30 a2=241 a3=1b6 items=1 ppid=3354 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.383000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 07:54:49.383000 audit: PATH item=0 name="/dev/fd/63" inode=24700 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.393000 audit[3408]: AVC avc: denied { write } for pid=3408 comm="tee" name="fd" dev="proc" ino=25701 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.393000 audit[3408]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc7ddeea2e a2=241 a3=1b6 items=1 ppid=3358 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.393000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 07:54:49.393000 audit: PATH item=0 name="/dev/fd/63" inode=25694 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.393000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.398000 audit[3420]: AVC avc: denied { write } for pid=3420 comm="tee" name="fd" dev="proc" ino=24714 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.398000 audit[3429]: AVC avc: denied { write } for pid=3429 comm="tee" name="fd" dev="proc" ino=24054 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.398000 audit[3429]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0cb52a2e a2=241 a3=1b6 items=1 ppid=3355 pid=3429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.398000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 07:54:49.398000 audit: PATH item=0 name="/dev/fd/63" inode=26687 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.398000 audit[3398]: AVC avc: denied { write } for pid=3398 comm="tee" name="fd" dev="proc" ino=26690 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.398000 audit[3398]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffffb1da2e a2=241 a3=1b6 items=1 ppid=3359 pid=3398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.398000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 07:54:49.398000 audit: PATH item=0 name="/dev/fd/63" inode=24039 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.398000 audit[3420]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcfdcfba1e a2=241 a3=1b6 items=1 ppid=3372 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.398000 audit: CWD 
cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 07:54:49.398000 audit: PATH item=0 name="/dev/fd/63" inode=24711 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.398000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.412000 audit[3427]: AVC avc: denied { write } for pid=3427 comm="tee" name="fd" dev="proc" ino=26695 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:54:49.412000 audit[3427]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd00feea1f a2=241 a3=1b6 items=1 ppid=3362 pid=3427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.412000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 07:54:49.412000 audit: PATH item=0 name="/dev/fd/63" inode=26684 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:54:49.412000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:54:49.478352 kubelet[2216]: E0702 07:54:49.478054 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:49.842072 systemd-networkd[1074]: vxlan.calico: Link UP Jul 2 07:54:49.842080 systemd-networkd[1074]: vxlan.calico: Gained carrier Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.852000 audit: BPF prog-id=10 op=LOAD Jul 2 07:54:49.852000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeab4f5c90 a2=70 a3=7ff02b667000 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.852000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.852000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit: BPF prog-id=11 op=LOAD Jul 2 07:54:49.853000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffeab4f5c90 a2=70 a3=6f items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.853000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 
07:54:49.853000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffeab4f5c40 a2=70 a3=7ffeab4f5c90 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.853000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.853000 audit: BPF prog-id=12 op=LOAD Jul 2 07:54:49.853000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffeab4f5c20 a2=70 a3=7ffeab4f5c90 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.853000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.854000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:54:49.854000 audit[3526]: AVC avc: denied { bpf } for 
pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.854000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeab4f5d00 a2=70 a3=0 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.854000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.854000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeab4f5cf0 a2=70 a3=0 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.854000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.854000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffeab4f5c60 a2=70 a3=0 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.854000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.855000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.855000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffeab4f5d30 a2=70 a3=175b8b0 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.855000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.855000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.855000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffeab4f5d30 a2=70 a3=1758880 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.855000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { perfmon } for pid=3526 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit[3526]: AVC avc: denied { bpf } for pid=3526 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.857000 audit: BPF prog-id=13 op=LOAD Jul 2 07:54:49.857000 audit[3526]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeab4f5c50 a2=70 a3=0 items=0 ppid=3357 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.857000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:54:49.859000 audit[3534]: AVC avc: denied { bpf } for pid=3534 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.859000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1b0844a0 a2=70 a3=208 items=0 ppid=3357 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.859000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 2 07:54:49.859000 audit[3534]: AVC avc: denied { bpf } for pid=3534 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 2 07:54:49.859000 audit[3534]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fff1b084370 a2=70 a3=3 items=0 ppid=3357 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.859000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 2 07:54:49.864000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:54:49.904000 audit[3557]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3557 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:49.904000 audit[3557]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc06ccdf20 a2=0 a3=7ffc06ccdf0c items=0 ppid=3357 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.904000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:49.908000 audit[3556]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3556 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:49.908000 audit[3556]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc0cd1e120 a2=0 a3=7ffc0cd1e10c items=0 ppid=3357 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.908000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:49.909000 audit[3555]: NETFILTER_CFG table=raw:99 family=2 entries=19 op=nft_register_chain pid=3555 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:49.909000 audit[3555]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffd89ee8670 a2=0 a3=7ffd89ee865c items=0 ppid=3357 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.909000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:49.910000 audit[3560]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3560 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 
2 07:54:49.910000 audit[3560]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffcad301210 a2=0 a3=7ffcad3011fc items=0 ppid=3357 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:49.910000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:50.887733 systemd-networkd[1074]: vxlan.calico: Gained IPv6LL Jul 2 07:54:53.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:39566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:53.846571 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:39566.service. Jul 2 07:54:53.847686 kernel: kauditd_printk_skb: 125 callbacks suppressed Jul 2 07:54:53.847756 kernel: audit: type=1130 audit(1719906893.846:329): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:39566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:53.891000 audit[3567]: USER_ACCT pid=3567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.892574 sshd[3567]: Accepted publickey for core from 10.0.0.1 port 39566 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:53.896089 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:53.895000 audit[3567]: CRED_ACQ pid=3567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.900511 systemd-logind[1289]: New session 10 of user core. Jul 2 07:54:53.901303 systemd[1]: Started session-10.scope. 
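Editor's note (not part of the captured log): the audit PROCTITLE fields above, for tee, bpftool, iptables-nft-restore and sshd, are hex-encoded process titles with NUL bytes separating the arguments. A minimal Python sketch for decoding them by hand while reading this log; the helper name is mine.

def decode_proctitle(hex_string: str) -> str:
    # PROCTITLE records the process title as auditd saw it: argv elements joined by NUL bytes.
    raw = bytes.fromhex(hex_string)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

# The sshd record above decodes to "sshd: core [priv]"; the xtables record decodes to
# "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".
print(decode_proctitle("737368643A20636F7265205B707269765D"))
print(decode_proctitle(
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130"
    "002D2D776169742D696E74657276616C003530303030"))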
Jul 2 07:54:53.901589 kernel: audit: type=1101 audit(1719906893.891:330): pid=3567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.901785 kernel: audit: type=1103 audit(1719906893.895:331): pid=3567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.895000 audit[3567]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5f6535f0 a2=3 a3=0 items=0 ppid=1 pid=3567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:53.909784 kernel: audit: type=1006 audit(1719906893.895:332): pid=3567 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 07:54:53.909835 kernel: audit: type=1300 audit(1719906893.895:332): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5f6535f0 a2=3 a3=0 items=0 ppid=1 pid=3567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:53.909866 kernel: audit: type=1327 audit(1719906893.895:332): proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:53.895000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:53.906000 audit[3567]: USER_START pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.915426 kernel: audit: type=1105 audit(1719906893.906:333): pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.915464 kernel: audit: type=1103 audit(1719906893.908:334): pid=3570 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:53.908000 audit[3570]: CRED_ACQ pid=3570 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.062674 sshd[3567]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:54.063000 audit[3567]: USER_END pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.065069 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:39576.service. Jul 2 07:54:54.065451 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:39566.service: Deactivated successfully. 
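Editor's note: the long run of AVC records earlier in this section (and the "kauditd_printk_skb: 125 callbacks suppressed" message just above) is dominated by bpftool being checked for CAP_BPF (capability=39) and CAP_PERFMON (capability=38) while Calico loads its XDP prefilter object, plus "denied { write }" records for tee under /proc. A rough tally sketch, assuming this console output has been saved to a file named console.log (the file name is an assumption, not something in the log):

import re
from collections import Counter

# Count SELinux AVC records by (command, permission), e.g. ("bpftool", "bpf").
avc = re.compile(r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}.*?comm="(?P<comm>[^"]+)"')

counts = Counter()
with open("console.log") as fh:
    for line in fh:
        for m in avc.finditer(line):
            counts[(m.group("comm"), m.group("perm"))] += 1

for (comm, perm), n in counts.most_common():
    print(f"{n:5d}  {comm:15s} {perm}")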
Jul 2 07:54:54.066101 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:54:54.071438 kernel: audit: type=1106 audit(1719906894.063:335): pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.071498 kernel: audit: type=1104 audit(1719906894.063:336): pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.063000 audit[3567]: CRED_DISP pid=3567 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:39576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:54.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:39566 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:54.069034 systemd-logind[1289]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:54:54.069771 systemd-logind[1289]: Removed session 10. Jul 2 07:54:54.106000 audit[3580]: USER_ACCT pid=3580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.107195 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 39576 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:54.107000 audit[3580]: CRED_ACQ pid=3580 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.107000 audit[3580]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9c7f9920 a2=3 a3=0 items=0 ppid=1 pid=3580 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:54.107000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:54.108052 sshd[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:54.111128 systemd-logind[1289]: New session 11 of user core. Jul 2 07:54:54.111916 systemd[1]: Started session-11.scope. 
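Editor's note: each kernel "audit: type=NNNN audit(...)" line above carries its own timestamp in audit(<epoch-seconds>.<millis>:<serial>) form; the wall-clock prefix and the audit stamp describe the same instant. A small illustrative converter (the function name is mine):

import re
from datetime import datetime, timezone

def audit_stamp(field: str):
    # Split "audit(1719906893.846:329)" into a UTC timestamp and the record serial.
    epoch, serial = re.match(r"audit\((\d+\.\d+):(\d+)\)", field).groups()
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    return when.isoformat(timespec="milliseconds"), int(serial)

# Matches the "Jul 2 07:54:53.846" prefix on the type=1130 record above.
print(audit_stamp("audit(1719906893.846:329)"))   # ('2024-07-02T07:54:53.846+00:00', 329)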
Jul 2 07:54:54.114000 audit[3580]: USER_START pid=3580 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.115000 audit[3585]: CRED_ACQ pid=3585 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.348654 sshd[3580]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:54.348000 audit[3580]: USER_END pid=3580 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.348000 audit[3580]: CRED_DISP pid=3580 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:39592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:54.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:39576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:54.351369 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:39592.service. Jul 2 07:54:54.352188 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:39576.service: Deactivated successfully. Jul 2 07:54:54.353839 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:54:54.354424 systemd-logind[1289]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:54:54.357030 systemd-logind[1289]: Removed session 11. Jul 2 07:54:54.392000 audit[3592]: USER_ACCT pid=3592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.393875 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 39592 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:54.393000 audit[3592]: CRED_ACQ pid=3592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.393000 audit[3592]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe436174e0 a2=3 a3=0 items=0 ppid=1 pid=3592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:54.393000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:54.395111 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:54.398341 systemd-logind[1289]: New session 12 of user core. 
Jul 2 07:54:54.399088 systemd[1]: Started session-12.scope. Jul 2 07:54:54.401000 audit[3592]: USER_START pid=3592 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.402000 audit[3597]: CRED_ACQ pid=3597 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.511886 sshd[3592]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:54.511000 audit[3592]: USER_END pid=3592 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.511000 audit[3592]: CRED_DISP pid=3592 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:54.514108 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:39592.service: Deactivated successfully. Jul 2 07:54:54.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:39592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:54.515033 systemd-logind[1289]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:54:54.515069 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:54:54.515829 systemd-logind[1289]: Removed session 12. Jul 2 07:54:55.394619 env[1302]: time="2024-07-02T07:54:55.394562182Z" level=info msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.430 [INFO][3634] k8s.go 608: Cleaning up netns ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.430 [INFO][3634] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" iface="eth0" netns="/var/run/netns/cni-479d699e-6336-1fbf-1e80-d728f9e883e3" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.430 [INFO][3634] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" iface="eth0" netns="/var/run/netns/cni-479d699e-6336-1fbf-1e80-d728f9e883e3" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.431 [INFO][3634] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" iface="eth0" netns="/var/run/netns/cni-479d699e-6336-1fbf-1e80-d728f9e883e3" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.431 [INFO][3634] k8s.go 615: Releasing IP address(es) ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.431 [INFO][3634] utils.go 188: Calico CNI releasing IP address ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.466 [INFO][3642] ipam_plugin.go 411: Releasing address using handleID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.467 [INFO][3642] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.467 [INFO][3642] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.473 [WARNING][3642] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.473 [INFO][3642] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.474 [INFO][3642] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:54:55.477685 env[1302]: 2024-07-02 07:54:55.475 [INFO][3634] k8s.go 621: Teardown processing complete. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:54:55.478181 env[1302]: time="2024-07-02T07:54:55.477829709Z" level=info msg="TearDown network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" successfully" Jul 2 07:54:55.478181 env[1302]: time="2024-07-02T07:54:55.477865373Z" level=info msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" returns successfully" Jul 2 07:54:55.478230 kubelet[2216]: E0702 07:54:55.478144 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:55.478741 env[1302]: time="2024-07-02T07:54:55.478709969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4t8xt,Uid:27cc9440-016f-4904-acdc-365f806c13c4,Namespace:kube-system,Attempt:1,}" Jul 2 07:54:55.480066 systemd[1]: run-netns-cni\x2d479d699e\x2d6336\x2d1fbf\x2d1e80\x2dd728f9e883e3.mount: Deactivated successfully. 
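Editor's note: the mount unit named in the last entry is the systemd-escaped form of the CNI network-namespace path that the Calico teardown above operated on (/var/run/netns/cni-479d699e-..., where /var/run is typically a symlink to /run). A minimal unescaping sketch written for this log, not taken from any tool:

import re

def unescape_unit_path(unit: str) -> str:
    # Undo systemd unit-name escaping: "-" separates path components and
    # \xNN encodes the literal byte NN (here \x2d for a real hyphen).
    name = unit.rsplit(".", 1)[0]          # drop the ".mount" suffix
    name = name.replace("-", "/")
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",
                        lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit_path(r"run-netns-cni\x2d479d699e\x2d6336\x2d1fbf\x2d1e80\x2dd728f9e883e3.mount"))
# -> /run/netns/cni-479d699e-6336-1fbf-1e80-d728f9e883e3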
Jul 2 07:54:55.586124 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:54:55.586240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali35b0bb19d5d: link becomes ready Jul 2 07:54:55.586472 systemd-networkd[1074]: cali35b0bb19d5d: Link UP Jul 2 07:54:55.586619 systemd-networkd[1074]: cali35b0bb19d5d: Gained carrier Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.528 [INFO][3651] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--4t8xt-eth0 coredns-5dd5756b68- kube-system 27cc9440-016f-4904-acdc-365f806c13c4 841 0 2024-07-02 07:54:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-4t8xt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35b0bb19d5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.528 [INFO][3651] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.553 [INFO][3665] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" HandleID="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.560 [INFO][3665] ipam_plugin.go 264: Auto assigning IP ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" HandleID="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-4t8xt", "timestamp":"2024-07-02 07:54:55.55388308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.561 [INFO][3665] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.561 [INFO][3665] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.561 [INFO][3665] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.562 [INFO][3665] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.566 [INFO][3665] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.568 [INFO][3665] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.570 [INFO][3665] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.571 [INFO][3665] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.571 [INFO][3665] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.572 [INFO][3665] ipam.go 1685: Creating new handle: k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.576 [INFO][3665] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.580 [INFO][3665] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.580 [INFO][3665] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" host="localhost" Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.580 [INFO][3665] ipam_plugin.go 373: Released host-wide IPAM lock. 
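Editor's note: the IPAM exchange above shows host "localhost" confirming its affine block 192.168.88.128/26 and claiming 192.168.88.129 for the coredns pod (the next pod later in this log gets .130 from the same block). A quick illustrative check of those numbers with the standard library:

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
pod_ip = ipaddress.ip_address("192.168.88.129")

print(block.num_addresses)        # 64 addresses in a /26 block
print(pod_ip in block)            # True: .129 falls inside the affine block
print(list(block.hosts())[:3])    # .129, .130, .131 - the first two match the assignments seen here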
Jul 2 07:54:55.595230 env[1302]: 2024-07-02 07:54:55.580 [INFO][3665] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" HandleID="k8s-pod-network.484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.582 [INFO][3651] k8s.go 386: Populated endpoint ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4t8xt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"27cc9440-016f-4904-acdc-365f806c13c4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-4t8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b0bb19d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.582 [INFO][3651] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.582 [INFO][3651] dataplane_linux.go 68: Setting the host side veth name to cali35b0bb19d5d ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.586 [INFO][3651] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.586 [INFO][3651] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4t8xt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"27cc9440-016f-4904-acdc-365f806c13c4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b", Pod:"coredns-5dd5756b68-4t8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b0bb19d5d", MAC:"12:be:cb:e2:06:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:55.595821 env[1302]: 2024-07-02 07:54:55.593 [INFO][3651] k8s.go 500: Wrote updated endpoint to datastore ContainerID="484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b" Namespace="kube-system" Pod="coredns-5dd5756b68-4t8xt" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:54:55.601000 audit[3687]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3687 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:55.601000 audit[3687]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffcca7160b0 a2=0 a3=7ffcca71609c items=0 ppid=3357 pid=3687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:55.601000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:55.608016 env[1302]: time="2024-07-02T07:54:55.607927595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:55.608192 env[1302]: time="2024-07-02T07:54:55.607990204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:55.608192 env[1302]: time="2024-07-02T07:54:55.608001347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:55.608280 env[1302]: time="2024-07-02T07:54:55.608188602Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b pid=3695 runtime=io.containerd.runc.v2 Jul 2 07:54:55.628519 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:54:55.650883 env[1302]: time="2024-07-02T07:54:55.649659400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4t8xt,Uid:27cc9440-016f-4904-acdc-365f806c13c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b\"" Jul 2 07:54:55.651748 kubelet[2216]: E0702 07:54:55.651727 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:55.653467 env[1302]: time="2024-07-02T07:54:55.653446367Z" level=info msg="CreateContainer within sandbox \"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:54:55.665648 env[1302]: time="2024-07-02T07:54:55.665623638Z" level=info msg="CreateContainer within sandbox \"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d1791e1b82cfca054e9cfc6961ce66a67805a4194facf8d2dfba355b721d937\"" Jul 2 07:54:55.666169 env[1302]: time="2024-07-02T07:54:55.666151423Z" level=info msg="StartContainer for \"1d1791e1b82cfca054e9cfc6961ce66a67805a4194facf8d2dfba355b721d937\"" Jul 2 07:54:55.700676 env[1302]: time="2024-07-02T07:54:55.700635214Z" level=info msg="StartContainer for \"1d1791e1b82cfca054e9cfc6961ce66a67805a4194facf8d2dfba355b721d937\" returns successfully" Jul 2 07:54:56.394178 env[1302]: time="2024-07-02T07:54:56.394136403Z" level=info msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" Jul 2 07:54:56.398748 env[1302]: time="2024-07-02T07:54:56.398702488Z" level=info msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" Jul 2 07:54:56.498537 kubelet[2216]: E0702 07:54:56.498194 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.494 [INFO][3801] k8s.go 608: Cleaning up netns ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.494 [INFO][3801] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" iface="eth0" netns="/var/run/netns/cni-f3686aba-d749-35f1-f413-7624ad36b955" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.494 [INFO][3801] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" iface="eth0" netns="/var/run/netns/cni-f3686aba-d749-35f1-f413-7624ad36b955" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.495 [INFO][3801] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" iface="eth0" netns="/var/run/netns/cni-f3686aba-d749-35f1-f413-7624ad36b955" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.495 [INFO][3801] k8s.go 615: Releasing IP address(es) ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.495 [INFO][3801] utils.go 188: Calico CNI releasing IP address ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.516 [INFO][3818] ipam_plugin.go 411: Releasing address using handleID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.516 [INFO][3818] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.516 [INFO][3818] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.572 [WARNING][3818] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.572 [INFO][3818] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.573 [INFO][3818] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:54:56.580684 env[1302]: 2024-07-02 07:54:56.578 [INFO][3801] k8s.go 621: Teardown processing complete. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:54:56.582910 env[1302]: time="2024-07-02T07:54:56.582848496Z" level=info msg="TearDown network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" successfully" Jul 2 07:54:56.582910 env[1302]: time="2024-07-02T07:54:56.582890734Z" level=info msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" returns successfully" Jul 2 07:54:56.583161 systemd[1]: run-netns-cni\x2df3686aba\x2dd749\x2d35f1\x2df413\x2d7624ad36b955.mount: Deactivated successfully. 
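Editor's note: the recurring kubelet dns.go:153 "Nameserver limits exceeded" errors in this section indicate the node's resolv.conf lists more nameservers than kubelet applies (three), so only the first three survive. A hedged sketch of that trimming; the fourth nameserver below is purely hypothetical, since the log only shows the three that were applied (1.1.1.1, 1.0.0.1, 8.8.8.8):

MAX_NAMESERVERS = 3   # kubelet's per-pod nameserver limit

resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.0.2.53
"""

servers = [line.split()[1] for line in resolv_conf.splitlines()
           if line.startswith("nameserver")]
if len(servers) > MAX_NAMESERVERS:
    print("Nameserver limits exceeded, applied line:",
          " ".join(servers[:MAX_NAMESERVERS]))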
Jul 2 07:54:56.584695 env[1302]: time="2024-07-02T07:54:56.584660557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d46f8b8c-b7tbb,Uid:be490d13-b47c-4e6c-9d39-8c2a55153f51,Namespace:calico-system,Attempt:1,}" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.492 [INFO][3802] k8s.go 608: Cleaning up netns ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.493 [INFO][3802] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" iface="eth0" netns="/var/run/netns/cni-50d32323-6d39-ac19-9156-def56d7c562e" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.493 [INFO][3802] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" iface="eth0" netns="/var/run/netns/cni-50d32323-6d39-ac19-9156-def56d7c562e" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.494 [INFO][3802] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" iface="eth0" netns="/var/run/netns/cni-50d32323-6d39-ac19-9156-def56d7c562e" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.494 [INFO][3802] k8s.go 615: Releasing IP address(es) ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.494 [INFO][3802] utils.go 188: Calico CNI releasing IP address ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.534 [INFO][3817] ipam_plugin.go 411: Releasing address using handleID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.534 [INFO][3817] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.573 [INFO][3817] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.581 [WARNING][3817] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.581 [INFO][3817] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.584 [INFO][3817] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:54:56.587871 env[1302]: 2024-07-02 07:54:56.586 [INFO][3802] k8s.go 621: Teardown processing complete. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:54:56.590672 systemd[1]: run-netns-cni\x2d50d32323\x2d6d39\x2dac19\x2d9156\x2ddef56d7c562e.mount: Deactivated successfully. 
Jul 2 07:54:56.592300 env[1302]: time="2024-07-02T07:54:56.592255721Z" level=info msg="TearDown network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" successfully" Jul 2 07:54:56.592392 env[1302]: time="2024-07-02T07:54:56.592363592Z" level=info msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" returns successfully" Jul 2 07:54:56.594034 kubelet[2216]: E0702 07:54:56.592749 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:56.594345 kubelet[2216]: I0702 07:54:56.594142 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4t8xt" podStartSLOduration=30.593570691 podCreationTimestamp="2024-07-02 07:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:54:56.580215991 +0000 UTC m=+43.279034667" watchObservedRunningTime="2024-07-02 07:54:56.593570691 +0000 UTC m=+43.292389367" Jul 2 07:54:56.594607 env[1302]: time="2024-07-02T07:54:56.594576336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tddcs,Uid:aca79512-22d0-4402-8f15-275b2ea8d5f5,Namespace:kube-system,Attempt:1,}" Jul 2 07:54:56.655000 audit[3854]: NETFILTER_CFG table=filter:102 family=2 entries=11 op=nft_register_rule pid=3854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:56.655000 audit[3854]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffca663d410 a2=0 a3=7ffca663d3fc items=0 ppid=2389 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:56.657000 audit[3854]: NETFILTER_CFG table=nat:103 family=2 entries=35 op=nft_register_chain pid=3854 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:56.657000 audit[3854]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffca663d410 a2=0 a3=7ffca663d3fc items=0 ppid=2389 pid=3854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.657000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:56.672000 audit[3862]: NETFILTER_CFG table=filter:104 family=2 entries=8 op=nft_register_rule pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:56.672000 audit[3862]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd3c76e820 a2=0 a3=7ffd3c76e80c items=0 ppid=2389 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.672000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:56.676000 audit[3862]: NETFILTER_CFG table=nat:105 family=2 entries=20 op=nft_register_rule 
pid=3862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:56.676000 audit[3862]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd3c76e820 a2=0 a3=7ffd3c76e80c items=0 ppid=2389 pid=3862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.676000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:56.733182 systemd-networkd[1074]: calica0cc2eeb19: Link UP Jul 2 07:54:56.734815 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:54:56.734873 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calica0cc2eeb19: link becomes ready Jul 2 07:54:56.734961 systemd-networkd[1074]: calica0cc2eeb19: Gained carrier Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.678 [INFO][3831] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--tddcs-eth0 coredns-5dd5756b68- kube-system aca79512-22d0-4402-8f15-275b2ea8d5f5 857 0 2024-07-02 07:54:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-tddcs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calica0cc2eeb19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.678 [INFO][3831] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.704 [INFO][3863] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" HandleID="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.712 [INFO][3863] ipam_plugin.go 264: Auto assigning IP ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" HandleID="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027de80), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-tddcs", "timestamp":"2024-07-02 07:54:56.704778843 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.712 [INFO][3863] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.712 [INFO][3863] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
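[Editor's note] The PROCTITLE values in the audit records above are the restored command line of the iptables-restore run, hex-encoded with NUL bytes between arguments. A minimal decoding sketch (plain Python; the helper name decode_proctitle is mine):

# Decode an audit PROCTITLE hex string back into the command it records.
# The kernel hex-encodes the process title; arguments are separated by NUL bytes.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode() for part in raw.split(b"\x00") if part)

# Value copied from one of the PROCTITLE records above.
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
))
# -> iptables-restore -w 5 -W 100000 --noflush --counters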
Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.712 [INFO][3863] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.713 [INFO][3863] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.716 [INFO][3863] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.719 [INFO][3863] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.721 [INFO][3863] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.722 [INFO][3863] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.722 [INFO][3863] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.723 [INFO][3863] ipam.go 1685: Creating new handle: k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.726 [INFO][3863] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.729 [INFO][3863] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.729 [INFO][3863] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" host="localhost" Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.729 [INFO][3863] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 07:54:56.743414 env[1302]: 2024-07-02 07:54:56.729 [INFO][3863] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" HandleID="k8s-pod-network.84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.731 [INFO][3831] k8s.go 386: Populated endpoint ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tddcs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aca79512-22d0-4402-8f15-275b2ea8d5f5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-tddcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica0cc2eeb19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.731 [INFO][3831] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.731 [INFO][3831] dataplane_linux.go 68: Setting the host side veth name to calica0cc2eeb19 ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.735 [INFO][3831] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.735 [INFO][3831] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tddcs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aca79512-22d0-4402-8f15-275b2ea8d5f5", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc", Pod:"coredns-5dd5756b68-tddcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica0cc2eeb19", MAC:"ae:22:80:b1:bc:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:56.744110 env[1302]: 2024-07-02 07:54:56.741 [INFO][3831] k8s.go 500: Wrote updated endpoint to datastore ContainerID="84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc" Namespace="kube-system" Pod="coredns-5dd5756b68-tddcs" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:54:56.757000 audit[3890]: NETFILTER_CFG table=filter:106 family=2 entries=30 op=nft_register_chain pid=3890 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:56.757000 audit[3890]: SYSCALL arch=c000003e syscall=46 success=yes exit=17032 a0=3 a1=7ffcfb97dc20 a2=0 a3=7ffcfb97dc0c items=0 ppid=3357 pid=3890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.757000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:56.760675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif5b23881bc8: link becomes ready Jul 2 07:54:56.760616 systemd-networkd[1074]: calif5b23881bc8: Link UP Jul 2 07:54:56.760741 systemd-networkd[1074]: calif5b23881bc8: Gained carrier Jul 2 07:54:56.760921 env[1302]: time="2024-07-02T07:54:56.760876573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:56.760994 env[1302]: time="2024-07-02T07:54:56.760915403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:56.760994 env[1302]: time="2024-07-02T07:54:56.760925374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:56.761171 env[1302]: time="2024-07-02T07:54:56.761069600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc pid=3897 runtime=io.containerd.runc.v2 Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.677 [INFO][3841] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0 calico-kube-controllers-6d46f8b8c- calico-system be490d13-b47c-4e6c-9d39-8c2a55153f51 856 0 2024-07-02 07:54:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d46f8b8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d46f8b8c-b7tbb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5b23881bc8 [] []}} ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.678 [INFO][3841] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.706 [INFO][3868] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" HandleID="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.713 [INFO][3868] ipam_plugin.go 264: Auto assigning IP ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" HandleID="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004fc480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d46f8b8c-b7tbb", "timestamp":"2024-07-02 07:54:56.706337264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.713 [INFO][3868] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.729 [INFO][3868] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.730 [INFO][3868] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.735 [INFO][3868] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.738 [INFO][3868] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.743 [INFO][3868] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.745 [INFO][3868] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.747 [INFO][3868] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.747 [INFO][3868] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.748 [INFO][3868] ipam.go 1685: Creating new handle: k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.751 [INFO][3868] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.754 [INFO][3868] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.754 [INFO][3868] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" host="localhost" Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.754 [INFO][3868] ipam_plugin.go 373: Released host-wide IPAM lock. 
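[Editor's note] The IPAM sequence above acquires the host-wide lock, confirms the host's affinity for block 192.168.88.128/26, and claims 192.168.88.130 and 192.168.88.131 from it. A quick stdlib check (illustrative only, not Calico's own code) that those addresses fall inside that /26:

# Verify the claimed pod IPs belong to the host-affine IPAM block from the log.
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
claimed = ["192.168.88.130", "192.168.88.131"]

print(block.num_addresses)               # 64 addresses available in the /26 block
for addr in claimed:
    assert ipaddress.ip_address(addr) in block
    print(addr, "is inside", block)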
Jul 2 07:54:56.772056 env[1302]: 2024-07-02 07:54:56.754 [INFO][3868] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" HandleID="k8s-pod-network.5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.757 [INFO][3841] k8s.go 386: Populated endpoint ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0", GenerateName:"calico-kube-controllers-6d46f8b8c-", Namespace:"calico-system", SelfLink:"", UID:"be490d13-b47c-4e6c-9d39-8c2a55153f51", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d46f8b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d46f8b8c-b7tbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b23881bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.757 [INFO][3841] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.757 [INFO][3841] dataplane_linux.go 68: Setting the host side veth name to calif5b23881bc8 ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.760 [INFO][3841] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.761 [INFO][3841] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" 
Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0", GenerateName:"calico-kube-controllers-6d46f8b8c-", Namespace:"calico-system", SelfLink:"", UID:"be490d13-b47c-4e6c-9d39-8c2a55153f51", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d46f8b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c", Pod:"calico-kube-controllers-6d46f8b8c-b7tbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b23881bc8", MAC:"ae:77:0c:5d:67:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:56.772664 env[1302]: 2024-07-02 07:54:56.767 [INFO][3841] k8s.go 500: Wrote updated endpoint to datastore ContainerID="5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c" Namespace="calico-system" Pod="calico-kube-controllers-6d46f8b8c-b7tbb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:54:56.779000 audit[3932]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=3932 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:56.779000 audit[3932]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7fff02600780 a2=0 a3=7fff0260076c items=0 ppid=3357 pid=3932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:56.779000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:56.785810 env[1302]: time="2024-07-02T07:54:56.785762108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:56.785987 env[1302]: time="2024-07-02T07:54:56.785798844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:56.785987 env[1302]: time="2024-07-02T07:54:56.785813123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:56.785987 env[1302]: time="2024-07-02T07:54:56.785946938Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c pid=3947 runtime=io.containerd.runc.v2 Jul 2 07:54:56.788510 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:54:56.808004 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:54:56.813159 env[1302]: time="2024-07-02T07:54:56.813075528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-tddcs,Uid:aca79512-22d0-4402-8f15-275b2ea8d5f5,Namespace:kube-system,Attempt:1,} returns sandbox id \"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc\"" Jul 2 07:54:56.813911 kubelet[2216]: E0702 07:54:56.813884 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:56.816466 env[1302]: time="2024-07-02T07:54:56.816442792Z" level=info msg="CreateContainer within sandbox \"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:54:56.835100 env[1302]: time="2024-07-02T07:54:56.835063293Z" level=info msg="CreateContainer within sandbox \"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cdba4cec1f6ca709ec4827562d0bb2c9b8f8d02257d331c061e5199e51745f60\"" Jul 2 07:54:56.835652 env[1302]: time="2024-07-02T07:54:56.835630237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d46f8b8c-b7tbb,Uid:be490d13-b47c-4e6c-9d39-8c2a55153f51,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c\"" Jul 2 07:54:56.835851 env[1302]: time="2024-07-02T07:54:56.835833846Z" level=info msg="StartContainer for \"cdba4cec1f6ca709ec4827562d0bb2c9b8f8d02257d331c061e5199e51745f60\"" Jul 2 07:54:56.838660 env[1302]: time="2024-07-02T07:54:56.838623764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 07:54:56.872088 env[1302]: time="2024-07-02T07:54:56.871571071Z" level=info msg="StartContainer for \"cdba4cec1f6ca709ec4827562d0bb2c9b8f8d02257d331c061e5199e51745f60\" returns successfully" Jul 2 07:54:57.095757 systemd-networkd[1074]: cali35b0bb19d5d: Gained IPv6LL Jul 2 07:54:57.501350 kubelet[2216]: E0702 07:54:57.501326 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:57.501671 kubelet[2216]: E0702 07:54:57.501378 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:57.508893 kubelet[2216]: I0702 07:54:57.508669 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-tddcs" podStartSLOduration=31.508636263 podCreationTimestamp="2024-07-02 07:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 07:54:57.50832265 +0000 UTC m=+44.207141326" watchObservedRunningTime="2024-07-02 07:54:57.508636263 +0000 UTC m=+44.207454939" Jul 2 07:54:57.687000 audit[4029]: NETFILTER_CFG table=filter:108 family=2 entries=8 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:57.687000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff8fdb14c0 a2=0 a3=7fff8fdb14ac items=0 ppid=2389 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:57.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:57.688000 audit[4029]: NETFILTER_CFG table=nat:109 family=2 entries=44 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:57.688000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff8fdb14c0 a2=0 a3=7fff8fdb14ac items=0 ppid=2389 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:57.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:58.377774 systemd-networkd[1074]: calif5b23881bc8: Gained IPv6LL Jul 2 07:54:58.395051 env[1302]: time="2024-07-02T07:54:58.395020352Z" level=info msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] k8s.go 608: Cleaning up netns ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" iface="eth0" netns="/var/run/netns/cni-1bc51391-ec0a-69f4-5012-132446ad5837" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" iface="eth0" netns="/var/run/netns/cni-1bc51391-ec0a-69f4-5012-132446ad5837" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" iface="eth0" netns="/var/run/netns/cni-1bc51391-ec0a-69f4-5012-132446ad5837" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] k8s.go 615: Releasing IP address(es) ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.435 [INFO][4048] utils.go 188: Calico CNI releasing IP address ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.455 [INFO][4055] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.455 [INFO][4055] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.455 [INFO][4055] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.460 [WARNING][4055] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.460 [INFO][4055] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.461 [INFO][4055] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:54:58.464353 env[1302]: 2024-07-02 07:54:58.462 [INFO][4048] k8s.go 621: Teardown processing complete. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:54:58.466780 systemd[1]: run-netns-cni\x2d1bc51391\x2dec0a\x2d69f4\x2d5012\x2d132446ad5837.mount: Deactivated successfully. 
Jul 2 07:54:58.467770 env[1302]: time="2024-07-02T07:54:58.467527649Z" level=info msg="TearDown network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" successfully" Jul 2 07:54:58.467770 env[1302]: time="2024-07-02T07:54:58.467560085Z" level=info msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" returns successfully" Jul 2 07:54:58.468168 env[1302]: time="2024-07-02T07:54:58.468145474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjtw8,Uid:092f7597-7194-4dd8-8fd0-5b1161264bc5,Namespace:calico-system,Attempt:1,}" Jul 2 07:54:58.503002 kubelet[2216]: E0702 07:54:58.502974 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:58.676095 systemd-networkd[1074]: cali7985a341825: Link UP Jul 2 07:54:58.678556 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:54:58.678615 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7985a341825: link becomes ready Jul 2 07:54:58.678746 systemd-networkd[1074]: cali7985a341825: Gained carrier Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.613 [INFO][4063] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xjtw8-eth0 csi-node-driver- calico-system 092f7597-7194-4dd8-8fd0-5b1161264bc5 888 0 2024-07-02 07:54:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-xjtw8 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali7985a341825 [] []}} ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.613 [INFO][4063] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.636 [INFO][4077] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" HandleID="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.643 [INFO][4077] ipam_plugin.go 264: Auto assigning IP ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" HandleID="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xjtw8", "timestamp":"2024-07-02 07:54:58.636351673 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.643 [INFO][4077] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.643 [INFO][4077] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.643 [INFO][4077] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.645 [INFO][4077] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.648 [INFO][4077] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.661 [INFO][4077] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.662 [INFO][4077] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.664 [INFO][4077] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.664 [INFO][4077] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.665 [INFO][4077] ipam.go 1685: Creating new handle: k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.668 [INFO][4077] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.672 [INFO][4077] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.672 [INFO][4077] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" host="localhost" Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.672 [INFO][4077] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 07:54:58.688423 env[1302]: 2024-07-02 07:54:58.672 [INFO][4077] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" HandleID="k8s-pod-network.0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.674 [INFO][4063] k8s.go 386: Populated endpoint ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xjtw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f7597-7194-4dd8-8fd0-5b1161264bc5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xjtw8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7985a341825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.674 [INFO][4063] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.674 [INFO][4063] dataplane_linux.go 68: Setting the host side veth name to cali7985a341825 ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.679 [INFO][4063] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.679 [INFO][4063] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xjtw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f7597-7194-4dd8-8fd0-5b1161264bc5", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d", Pod:"csi-node-driver-xjtw8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7985a341825", MAC:"0e:bf:89:e0:08:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:54:58.688975 env[1302]: 2024-07-02 07:54:58.685 [INFO][4063] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d" Namespace="calico-system" Pod="csi-node-driver-xjtw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:54:58.696816 systemd-networkd[1074]: calica0cc2eeb19: Gained IPv6LL Jul 2 07:54:58.697000 audit[4095]: NETFILTER_CFG table=filter:110 family=2 entries=42 op=nft_register_chain pid=4095 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:54:58.697000 audit[4095]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffdbd1eec90 a2=0 a3=7ffdbd1eec7c items=0 ppid=3357 pid=4095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:58.697000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:54:58.704000 audit[4108]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:58.704000 audit[4108]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc2b125850 a2=0 a3=7ffc2b12583c items=0 ppid=2389 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:58.704000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:58.706772 env[1302]: time="2024-07-02T07:54:58.706711694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:54:58.706772 env[1302]: time="2024-07-02T07:54:58.706753951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:54:58.706772 env[1302]: time="2024-07-02T07:54:58.706763851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:54:58.712089 env[1302]: time="2024-07-02T07:54:58.706972609Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d pid=4109 runtime=io.containerd.runc.v2 Jul 2 07:54:58.715000 audit[4108]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4108 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:54:58.715000 audit[4108]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc2b125850 a2=0 a3=7ffc2b12583c items=0 ppid=2389 pid=4108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:58.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:54:58.735229 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:54:58.746437 env[1302]: time="2024-07-02T07:54:58.746399655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xjtw8,Uid:092f7597-7194-4dd8-8fd0-5b1161264bc5,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d\"" Jul 2 07:54:58.815998 env[1302]: time="2024-07-02T07:54:58.815956993Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.817790 env[1302]: time="2024-07-02T07:54:58.817743912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.819357 env[1302]: time="2024-07-02T07:54:58.819328446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.820702 env[1302]: time="2024-07-02T07:54:58.820664340Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:54:58.821059 env[1302]: time="2024-07-02T07:54:58.821026713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 07:54:58.821548 env[1302]: time="2024-07-02T07:54:58.821503439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 07:54:58.827248 env[1302]: time="2024-07-02T07:54:58.827202387Z" level=info msg="CreateContainer within sandbox \"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 07:54:58.839032 env[1302]: time="2024-07-02T07:54:58.838992998Z" level=info msg="CreateContainer 
within sandbox \"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1c7850c5a3ca436fddeccac5ece29cc635a05ce4ee70dfd66af81888cfa398a8\"" Jul 2 07:54:58.839311 env[1302]: time="2024-07-02T07:54:58.839284685Z" level=info msg="StartContainer for \"1c7850c5a3ca436fddeccac5ece29cc635a05ce4ee70dfd66af81888cfa398a8\"" Jul 2 07:54:58.889990 env[1302]: time="2024-07-02T07:54:58.889939631Z" level=info msg="StartContainer for \"1c7850c5a3ca436fddeccac5ece29cc635a05ce4ee70dfd66af81888cfa398a8\" returns successfully" Jul 2 07:54:59.507784 kubelet[2216]: E0702 07:54:59.507756 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:54:59.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:59.515412 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:39602.service. Jul 2 07:54:59.517638 kernel: kauditd_printk_skb: 59 callbacks suppressed Jul 2 07:54:59.517699 kernel: audit: type=1130 audit(1719906899.514:368): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:54:59.517721 kubelet[2216]: I0702 07:54:59.516966 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d46f8b8c-b7tbb" podStartSLOduration=25.533064597 podCreationTimestamp="2024-07-02 07:54:32 +0000 UTC" firstStartedPulling="2024-07-02 07:54:56.83745786 +0000 UTC m=+43.536276536" lastFinishedPulling="2024-07-02 07:54:58.821289911 +0000 UTC m=+45.520108587" observedRunningTime="2024-07-02 07:54:59.515381108 +0000 UTC m=+46.214199784" watchObservedRunningTime="2024-07-02 07:54:59.516896648 +0000 UTC m=+46.215715314" Jul 2 07:54:59.558000 audit[4179]: USER_ACCT pid=4179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.559399 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 39602 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:54:59.562000 audit[4179]: CRED_ACQ pid=4179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.563639 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:54:59.566975 kernel: audit: type=1101 audit(1719906899.558:369): pid=4179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.567091 kernel: audit: type=1103 audit(1719906899.562:370): pid=4179 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.567113 kernel: audit: type=1006 audit(1719906899.562:371): pid=4179 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 2 07:54:59.567272 systemd-logind[1289]: New session 13 of user core. Jul 2 07:54:59.568096 systemd[1]: Started session-13.scope. Jul 2 07:54:59.569349 kernel: audit: type=1300 audit(1719906899.562:371): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4268c840 a2=3 a3=0 items=0 ppid=1 pid=4179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:59.562000 audit[4179]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4268c840 a2=3 a3=0 items=0 ppid=1 pid=4179 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:54:59.574557 kernel: audit: type=1327 audit(1719906899.562:371): proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:59.562000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:54:59.579000 audit[4179]: USER_START pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.581000 audit[4182]: CRED_ACQ pid=4182 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.588107 kernel: audit: type=1105 audit(1719906899.579:372): pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.588205 kernel: audit: type=1103 audit(1719906899.581:373): pid=4182 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.680977 sshd[4179]: pam_unix(sshd:session): session closed for user core Jul 2 07:54:59.681000 audit[4179]: USER_END pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.683768 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:39602.service: Deactivated successfully. Jul 2 07:54:59.684808 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:54:59.685182 systemd-logind[1289]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:54:59.686112 systemd-logind[1289]: Removed session 13. 
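[Editor's note] The pod_startup_latency_tracker entry above reports podStartSLOduration=25.533064597 for calico-kube-controllers-6d46f8b8c-b7tbb. Those figures are consistent with the SLO duration being the time from pod creation to the watch-observed running time minus the image-pull window; a quick arithmetic check under that interpretation (not a statement of the tracker's actual code):

# Reproduce podStartSLOduration from the timestamps in the tracker line above.
# All values are seconds past 07:54:00 UTC on 2024-07-02, taken from the log.
created    = 32.0            # podCreationTimestamp      07:54:32
pull_start = 56.837457860    # firstStartedPulling       07:54:56.83745786
pull_end   = 58.821289911    # lastFinishedPulling       07:54:58.821289911
observed   = 59.516896648    # watchObservedRunningTime  07:54:59.516896648

slo = (observed - created) - (pull_end - pull_start)
print(round(slo, 9))         # 25.533064597, matching the logged podStartSLOduration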
Jul 2 07:54:59.681000 audit[4179]: CRED_DISP pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.690062 kernel: audit: type=1106 audit(1719906899.681:374): pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.690120 kernel: audit: type=1104 audit(1719906899.681:375): pid=4179 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:54:59.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:00.458632 env[1302]: time="2024-07-02T07:55:00.458547960Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.508906 kubelet[2216]: I0702 07:55:00.508871 2216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:55:00.509385 kubelet[2216]: E0702 07:55:00.509360 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:55:00.547521 env[1302]: time="2024-07-02T07:55:00.547474109Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.553655 env[1302]: time="2024-07-02T07:55:00.553618619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.555711 env[1302]: time="2024-07-02T07:55:00.555660870Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:00.556046 env[1302]: time="2024-07-02T07:55:00.556010213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 07:55:00.557213 env[1302]: time="2024-07-02T07:55:00.557188039Z" level=info msg="CreateContainer within sandbox \"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 07:55:00.577660 env[1302]: time="2024-07-02T07:55:00.577573465Z" level=info msg="CreateContainer within sandbox \"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"391d53ec2de761cfbafc7a32316aafe9bf2de5c1a6c531d8ce9d8b2cc086af56\"" Jul 2 07:55:00.578100 env[1302]: time="2024-07-02T07:55:00.578075040Z" level=info msg="StartContainer for 
\"391d53ec2de761cfbafc7a32316aafe9bf2de5c1a6c531d8ce9d8b2cc086af56\"" Jul 2 07:55:00.620566 env[1302]: time="2024-07-02T07:55:00.620521365Z" level=info msg="StartContainer for \"391d53ec2de761cfbafc7a32316aafe9bf2de5c1a6c531d8ce9d8b2cc086af56\" returns successfully" Jul 2 07:55:00.622995 env[1302]: time="2024-07-02T07:55:00.622966098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 07:55:00.680760 systemd-networkd[1074]: cali7985a341825: Gained IPv6LL Jul 2 07:55:02.434037 env[1302]: time="2024-07-02T07:55:02.433989450Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.436063 env[1302]: time="2024-07-02T07:55:02.436022955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.438001 env[1302]: time="2024-07-02T07:55:02.437968470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.439505 env[1302]: time="2024-07-02T07:55:02.439476734Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:02.439877 env[1302]: time="2024-07-02T07:55:02.439841747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 07:55:02.444395 env[1302]: time="2024-07-02T07:55:02.441425054Z" level=info msg="CreateContainer within sandbox \"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 07:55:02.460745 env[1302]: time="2024-07-02T07:55:02.460702673Z" level=info msg="CreateContainer within sandbox \"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"422742cd680285ef0d54e48de843077ba2685bc502b5ae18cf86feee283e7e15\"" Jul 2 07:55:02.461361 env[1302]: time="2024-07-02T07:55:02.461322886Z" level=info msg="StartContainer for \"422742cd680285ef0d54e48de843077ba2685bc502b5ae18cf86feee283e7e15\"" Jul 2 07:55:02.480333 systemd[1]: run-containerd-runc-k8s.io-422742cd680285ef0d54e48de843077ba2685bc502b5ae18cf86feee283e7e15-runc.h1eSb0.mount: Deactivated successfully. 
Jul 2 07:55:02.554774 env[1302]: time="2024-07-02T07:55:02.554725296Z" level=info msg="StartContainer for \"422742cd680285ef0d54e48de843077ba2685bc502b5ae18cf86feee283e7e15\" returns successfully" Jul 2 07:55:03.477624 kubelet[2216]: I0702 07:55:03.477588 2216 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 07:55:03.477624 kubelet[2216]: I0702 07:55:03.477624 2216 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 07:55:03.549224 kubelet[2216]: I0702 07:55:03.549200 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xjtw8" podStartSLOduration=27.856571943 podCreationTimestamp="2024-07-02 07:54:32 +0000 UTC" firstStartedPulling="2024-07-02 07:54:58.74744753 +0000 UTC m=+45.446266206" lastFinishedPulling="2024-07-02 07:55:02.440040372 +0000 UTC m=+49.138859048" observedRunningTime="2024-07-02 07:55:03.548824993 +0000 UTC m=+50.247643669" watchObservedRunningTime="2024-07-02 07:55:03.549164785 +0000 UTC m=+50.247983461" Jul 2 07:55:04.685504 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:54502.service. Jul 2 07:55:04.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:04.686910 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:55:04.687024 kernel: audit: type=1130 audit(1719906904.685:377): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:04.727000 audit[4283]: USER_ACCT pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.728643 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 54502 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:04.729880 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:04.728000 audit[4283]: CRED_ACQ pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.734282 systemd[1]: Started session-14.scope. Jul 2 07:55:04.735857 systemd-logind[1289]: New session 14 of user core. 
Jul 2 07:55:04.736138 kernel: audit: type=1101 audit(1719906904.727:378): pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.736166 kernel: audit: type=1103 audit(1719906904.728:379): pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.736185 kernel: audit: type=1006 audit(1719906904.728:380): pid=4283 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 2 07:55:04.728000 audit[4283]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca7a39b0 a2=3 a3=0 items=0 ppid=1 pid=4283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:04.742295 kernel: audit: type=1300 audit(1719906904.728:380): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdca7a39b0 a2=3 a3=0 items=0 ppid=1 pid=4283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:04.728000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:04.743701 kernel: audit: type=1327 audit(1719906904.728:380): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:04.743732 kernel: audit: type=1105 audit(1719906904.739:381): pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.739000 audit[4283]: USER_START pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.747851 kernel: audit: type=1103 audit(1719906904.740:382): pid=4286 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.740000 audit[4286]: CRED_ACQ pid=4286 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.850588 sshd[4283]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:04.850000 audit[4283]: USER_END pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.852791 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:54502.service: Deactivated successfully. Jul 2 07:55:04.853748 systemd-logind[1289]: Session 14 logged out. Waiting for processes to exit. 
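The csi_plugin.go records at 07:55:03 above show kubelet validating and registering the csi.tigera.io driver through its socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. One way to confirm from the node that such an endpoint is answering is to call the CSI Identity service directly; the sketch below uses the CSI spec's Go bindings and is an assumption about how one might probe it manually (kubelet's own registration goes through its plugin watcher, not this client):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        csi "github.com/container-storage-interface/spec/lib/go/csi"
        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Socket path taken from the kubelet log record above.
        conn, err := grpc.DialContext(ctx,
            "unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        id := csi.NewIdentityClient(conn)
        info, err := id.GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
        if err != nil {
            log.Fatal(err)
        }
        // Expect Name to be "csi.tigera.io", matching the driver name kubelet registered.
        fmt.Println(info.GetName(), info.GetVendorVersion())
    }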
Jul 2 07:55:04.853842 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:55:04.854512 systemd-logind[1289]: Removed session 14. Jul 2 07:55:04.850000 audit[4283]: CRED_DISP pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.859142 kernel: audit: type=1106 audit(1719906904.850:383): pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.859205 kernel: audit: type=1104 audit(1719906904.850:384): pid=4283 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:04.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:54502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:08.849813 kubelet[2216]: E0702 07:55:08.849455 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:55:09.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:09.853864 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:54516.service. Jul 2 07:55:09.854997 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:55:09.855041 kernel: audit: type=1130 audit(1719906909.853:386): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:09.896000 audit[4319]: USER_ACCT pid=4319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.897746 sshd[4319]: Accepted publickey for core from 10.0.0.1 port 54516 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:09.901000 audit[4319]: CRED_ACQ pid=4319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.902428 sshd[4319]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:09.906234 kernel: audit: type=1101 audit(1719906909.896:387): pid=4319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.906394 kernel: audit: type=1103 audit(1719906909.901:388): pid=4319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.906425 kernel: audit: type=1006 audit(1719906909.901:389): pid=4319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 2 07:55:09.907249 systemd[1]: Started session-15.scope. Jul 2 07:55:09.907518 systemd-logind[1289]: New session 15 of user core. 
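The recurring dns.go:153 warning ("Nameserver limits were exceeded, some nameservers have been omitted...") means the node's /etc/resolv.conf lists more nameservers than the limit of three that kubelet enforces (the same cap as glibc's MAXNS), so only the first three are applied: 1.1.1.1, 1.0.0.1 and 8.8.8.8. A minimal stdlib sketch of that truncation check, assuming the standard resolv.conf path; it illustrates the limit and is not kubelet's code:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    const maxNameservers = 3 // the cap kubelet applies, matching glibc's MAXNS

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            log.Fatal(err)
        }

        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limit exceeded: %d configured, only %v will be applied\n",
                len(servers), servers[:maxNameservers])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }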
Jul 2 07:55:09.901000 audit[4319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe20a314c0 a2=3 a3=0 items=0 ppid=1 pid=4319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:09.913279 kernel: audit: type=1300 audit(1719906909.901:389): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe20a314c0 a2=3 a3=0 items=0 ppid=1 pid=4319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:09.913330 kernel: audit: type=1327 audit(1719906909.901:389): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:09.901000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:09.914884 kernel: audit: type=1105 audit(1719906909.911:390): pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.911000 audit[4319]: USER_START pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.912000 audit[4322]: CRED_ACQ pid=4322 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:09.922198 kernel: audit: type=1103 audit(1719906909.912:391): pid=4322 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:10.010344 sshd[4319]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:10.010000 audit[4319]: USER_END pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:10.012702 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:54516.service: Deactivated successfully. Jul 2 07:55:10.013448 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:55:10.010000 audit[4319]: CRED_DISP pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:10.017147 systemd-logind[1289]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:55:10.017843 systemd-logind[1289]: Removed session 15. 
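Each kernel audit record carries an audit(<epoch>.<millis>:<serial>) stamp rather than a wall-clock time; converting the epoch shows it lines up with the surrounding syslog timestamps. For example, audit(1719906909.853:386) above is the 07:55:09.853 start of the sshd@14-10.0.0.138:22-10.0.0.1:54516 unit. A small Go sketch of that conversion:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Epoch milliseconds taken from "audit(1719906909.853:386)" in the records above.
        const epochMillis = 1719906909853
        t := time.UnixMilli(epochMillis).UTC()
        fmt.Println(t.Format("Jan _2 15:04:05.000")) // Jul  2 07:55:09.853
    }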
Jul 2 07:55:10.019048 kernel: audit: type=1106 audit(1719906910.010:392): pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:10.019110 kernel: audit: type=1104 audit(1719906910.010:393): pid=4319 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:10.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:54516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:11.716005 kubelet[2216]: I0702 07:55:11.715967 2216 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:55:11.767250 systemd[1]: run-containerd-runc-k8s.io-1c7850c5a3ca436fddeccac5ece29cc635a05ce4ee70dfd66af81888cfa398a8-runc.LWNh9k.mount: Deactivated successfully. Jul 2 07:55:13.368460 env[1302]: time="2024-07-02T07:55:13.368423077Z" level=info msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.396 [WARNING][4404] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0", GenerateName:"calico-kube-controllers-6d46f8b8c-", Namespace:"calico-system", SelfLink:"", UID:"be490d13-b47c-4e6c-9d39-8c2a55153f51", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d46f8b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c", Pod:"calico-kube-controllers-6d46f8b8c-b7tbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b23881bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.397 [INFO][4404] k8s.go 608: Cleaning up netns ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.397 [INFO][4404] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" iface="eth0" netns="" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.397 [INFO][4404] k8s.go 615: Releasing IP address(es) ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.397 [INFO][4404] utils.go 188: Calico CNI releasing IP address ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.414 [INFO][4411] ipam_plugin.go 411: Releasing address using handleID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.414 [INFO][4411] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.414 [INFO][4411] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.421 [WARNING][4411] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.421 [INFO][4411] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.422 [INFO][4411] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.425777 env[1302]: 2024-07-02 07:55:13.424 [INFO][4404] k8s.go 621: Teardown processing complete. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.426225 env[1302]: time="2024-07-02T07:55:13.425800383Z" level=info msg="TearDown network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" successfully" Jul 2 07:55:13.426225 env[1302]: time="2024-07-02T07:55:13.425830787Z" level=info msg="StopPodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" returns successfully" Jul 2 07:55:13.426382 env[1302]: time="2024-07-02T07:55:13.426344693Z" level=info msg="RemovePodSandbox for \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" Jul 2 07:55:13.426434 env[1302]: time="2024-07-02T07:55:13.426383632Z" level=info msg="Forcibly stopping sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\"" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.457 [WARNING][4438] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0", GenerateName:"calico-kube-controllers-6d46f8b8c-", Namespace:"calico-system", SelfLink:"", UID:"be490d13-b47c-4e6c-9d39-8c2a55153f51", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d46f8b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f8d9b9e6794ab38ee39282a788d7857365e2082ecef0fa19d97f5e15f22d81c", Pod:"calico-kube-controllers-6d46f8b8c-b7tbb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b23881bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.458 [INFO][4438] k8s.go 608: Cleaning up netns ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.458 [INFO][4438] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" iface="eth0" netns="" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.458 [INFO][4438] k8s.go 615: Releasing IP address(es) ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.458 [INFO][4438] utils.go 188: Calico CNI releasing IP address ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.475 [INFO][4445] ipam_plugin.go 411: Releasing address using handleID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.475 [INFO][4445] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.475 [INFO][4445] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.480 [WARNING][4445] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.480 [INFO][4445] ipam_plugin.go 439: Releasing address using workloadID ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" HandleID="k8s-pod-network.9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Workload="localhost-k8s-calico--kube--controllers--6d46f8b8c--b7tbb-eth0" Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.481 [INFO][4445] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.484260 env[1302]: 2024-07-02 07:55:13.482 [INFO][4438] k8s.go 621: Teardown processing complete. ContainerID="9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26" Jul 2 07:55:13.484722 env[1302]: time="2024-07-02T07:55:13.484283719Z" level=info msg="TearDown network for sandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" successfully" Jul 2 07:55:13.487257 env[1302]: time="2024-07-02T07:55:13.487228858Z" level=info msg="RemovePodSandbox \"9d69e3abd3fd08b82cb94c402e53d009f0cfb121989ea6e81a08be4194fa6d26\" returns successfully" Jul 2 07:55:13.487751 env[1302]: time="2024-07-02T07:55:13.487708152Z" level=info msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.518 [WARNING][4468] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tddcs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aca79512-22d0-4402-8f15-275b2ea8d5f5", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc", Pod:"coredns-5dd5756b68-tddcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica0cc2eeb19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 
07:55:13.561202 env[1302]: 2024-07-02 07:55:13.518 [INFO][4468] k8s.go 608: Cleaning up netns ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.518 [INFO][4468] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" iface="eth0" netns="" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.518 [INFO][4468] k8s.go 615: Releasing IP address(es) ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.518 [INFO][4468] utils.go 188: Calico CNI releasing IP address ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.542 [INFO][4476] ipam_plugin.go 411: Releasing address using handleID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.543 [INFO][4476] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.543 [INFO][4476] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.554 [WARNING][4476] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.554 [INFO][4476] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.555 [INFO][4476] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.561202 env[1302]: 2024-07-02 07:55:13.556 [INFO][4468] k8s.go 621: Teardown processing complete. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.561669 env[1302]: time="2024-07-02T07:55:13.561215271Z" level=info msg="TearDown network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" successfully" Jul 2 07:55:13.561669 env[1302]: time="2024-07-02T07:55:13.561239184Z" level=info msg="StopPodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" returns successfully" Jul 2 07:55:13.561669 env[1302]: time="2024-07-02T07:55:13.561510818Z" level=info msg="RemovePodSandbox for \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" Jul 2 07:55:13.561669 env[1302]: time="2024-07-02T07:55:13.561542244Z" level=info msg="Forcibly stopping sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\"" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.594 [WARNING][4499] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--tddcs-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"aca79512-22d0-4402-8f15-275b2ea8d5f5", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84b6b5af38c95eef67308206fa97fffcdee5b9cbee4f5ab6452ab49520a160bc", Pod:"coredns-5dd5756b68-tddcs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calica0cc2eeb19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.594 [INFO][4499] k8s.go 608: Cleaning up netns ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.594 [INFO][4499] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" iface="eth0" netns="" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.594 [INFO][4499] k8s.go 615: Releasing IP address(es) ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.594 [INFO][4499] utils.go 188: Calico CNI releasing IP address ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.611 [INFO][4506] ipam_plugin.go 411: Releasing address using handleID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.612 [INFO][4506] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.612 [INFO][4506] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.617 [WARNING][4506] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.617 [INFO][4506] ipam_plugin.go 439: Releasing address using workloadID ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" HandleID="k8s-pod-network.2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Workload="localhost-k8s-coredns--5dd5756b68--tddcs-eth0" Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.618 [INFO][4506] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.621678 env[1302]: 2024-07-02 07:55:13.619 [INFO][4499] k8s.go 621: Teardown processing complete. ContainerID="2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632" Jul 2 07:55:13.621678 env[1302]: time="2024-07-02T07:55:13.621618369Z" level=info msg="TearDown network for sandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" successfully" Jul 2 07:55:13.625481 env[1302]: time="2024-07-02T07:55:13.625449065Z" level=info msg="RemovePodSandbox \"2e153cb61d57805a16dd177c8ae0226e448e1f586597c53282cc44e45710f632\" returns successfully" Jul 2 07:55:13.625936 env[1302]: time="2024-07-02T07:55:13.625903546Z" level=info msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.654 [WARNING][4531] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4t8xt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"27cc9440-016f-4904-acdc-365f806c13c4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b", Pod:"coredns-5dd5756b68-4t8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b0bb19d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.681858 env[1302]: 2024-07-02 
07:55:13.654 [INFO][4531] k8s.go 608: Cleaning up netns ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.654 [INFO][4531] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" iface="eth0" netns="" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.654 [INFO][4531] k8s.go 615: Releasing IP address(es) ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.654 [INFO][4531] utils.go 188: Calico CNI releasing IP address ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.673 [INFO][4538] ipam_plugin.go 411: Releasing address using handleID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.673 [INFO][4538] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.673 [INFO][4538] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.678 [WARNING][4538] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.678 [INFO][4538] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.679 [INFO][4538] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.681858 env[1302]: 2024-07-02 07:55:13.680 [INFO][4531] k8s.go 621: Teardown processing complete. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.682304 env[1302]: time="2024-07-02T07:55:13.681862774Z" level=info msg="TearDown network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" successfully" Jul 2 07:55:13.682304 env[1302]: time="2024-07-02T07:55:13.681889913Z" level=info msg="StopPodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" returns successfully" Jul 2 07:55:13.682402 env[1302]: time="2024-07-02T07:55:13.682362405Z" level=info msg="RemovePodSandbox for \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" Jul 2 07:55:13.682445 env[1302]: time="2024-07-02T07:55:13.682409238Z" level=info msg="Forcibly stopping sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\"" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.711 [WARNING][4560] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--4t8xt-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"27cc9440-016f-4904-acdc-365f806c13c4", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"484d0a43daf45ccf8fb03861b2d7ca15758fa287e6cd4551b56e6e63512a536b", Pod:"coredns-5dd5756b68-4t8xt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b0bb19d5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.712 [INFO][4560] k8s.go 608: Cleaning up netns ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.712 [INFO][4560] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" iface="eth0" netns="" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.712 [INFO][4560] k8s.go 615: Releasing IP address(es) ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.712 [INFO][4560] utils.go 188: Calico CNI releasing IP address ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.730 [INFO][4568] ipam_plugin.go 411: Releasing address using handleID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.730 [INFO][4568] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.730 [INFO][4568] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.735 [WARNING][4568] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.735 [INFO][4568] ipam_plugin.go 439: Releasing address using workloadID ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" HandleID="k8s-pod-network.72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Workload="localhost-k8s-coredns--5dd5756b68--4t8xt-eth0" Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.736 [INFO][4568] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.739517 env[1302]: 2024-07-02 07:55:13.738 [INFO][4560] k8s.go 621: Teardown processing complete. ContainerID="72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257" Jul 2 07:55:13.739974 env[1302]: time="2024-07-02T07:55:13.739522293Z" level=info msg="TearDown network for sandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" successfully" Jul 2 07:55:13.742628 env[1302]: time="2024-07-02T07:55:13.742605849Z" level=info msg="RemovePodSandbox \"72c0776df27094a7c377f5a8c647cba4b6976cc7d49f9fb8f848919cf1029257\" returns successfully" Jul 2 07:55:13.743092 env[1302]: time="2024-07-02T07:55:13.743027509Z" level=info msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.772 [WARNING][4590] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xjtw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f7597-7194-4dd8-8fd0-5b1161264bc5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d", Pod:"csi-node-driver-xjtw8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7985a341825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.772 [INFO][4590] k8s.go 608: Cleaning up netns ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.772 [INFO][4590] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" iface="eth0" netns="" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.772 [INFO][4590] k8s.go 615: Releasing IP address(es) ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.772 [INFO][4590] utils.go 188: Calico CNI releasing IP address ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.790 [INFO][4598] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.790 [INFO][4598] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.790 [INFO][4598] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.795 [WARNING][4598] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.795 [INFO][4598] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.797 [INFO][4598] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.800069 env[1302]: 2024-07-02 07:55:13.798 [INFO][4590] k8s.go 621: Teardown processing complete. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.800529 env[1302]: time="2024-07-02T07:55:13.800087230Z" level=info msg="TearDown network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" successfully" Jul 2 07:55:13.800529 env[1302]: time="2024-07-02T07:55:13.800119798Z" level=info msg="StopPodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" returns successfully" Jul 2 07:55:13.800575 env[1302]: time="2024-07-02T07:55:13.800560553Z" level=info msg="RemovePodSandbox for \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" Jul 2 07:55:13.800636 env[1302]: time="2024-07-02T07:55:13.800583904Z" level=info msg="Forcibly stopping sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\"" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.829 [WARNING][4621] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xjtw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"092f7597-7194-4dd8-8fd0-5b1161264bc5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 54, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d32c910bc198e1584723ac15c121188656bb0b7e11a513beb705c2c791fac4d", Pod:"csi-node-driver-xjtw8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali7985a341825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.829 [INFO][4621] k8s.go 608: Cleaning up netns ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.829 [INFO][4621] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" iface="eth0" netns="" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.829 [INFO][4621] k8s.go 615: Releasing IP address(es) ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.829 [INFO][4621] utils.go 188: Calico CNI releasing IP address ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.846 [INFO][4628] ipam_plugin.go 411: Releasing address using handleID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.846 [INFO][4628] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.846 [INFO][4628] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.850 [WARNING][4628] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.850 [INFO][4628] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" HandleID="k8s-pod-network.3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Workload="localhost-k8s-csi--node--driver--xjtw8-eth0" Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.852 [INFO][4628] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:13.855384 env[1302]: 2024-07-02 07:55:13.853 [INFO][4621] k8s.go 621: Teardown processing complete. ContainerID="3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03" Jul 2 07:55:13.855831 env[1302]: time="2024-07-02T07:55:13.855402201Z" level=info msg="TearDown network for sandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" successfully" Jul 2 07:55:13.858537 env[1302]: time="2024-07-02T07:55:13.858502075Z" level=info msg="RemovePodSandbox \"3fdf44f111c59a5865d8a728f4a072a040a4c87dda44773466a304b798f5ce03\" returns successfully" Jul 2 07:55:15.012975 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:35092.service. Jul 2 07:55:15.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.013916 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:55:15.013979 kernel: audit: type=1130 audit(1719906915.012:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.055000 audit[4636]: USER_ACCT pid=4636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.056345 sshd[4636]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:15.059156 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:15.058000 audit[4636]: CRED_ACQ pid=4636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.062950 systemd-logind[1289]: New session 16 of user core. 
Jul 2 07:55:15.063614 kernel: audit: type=1101 audit(1719906915.055:396): pid=4636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.063723 kernel: audit: type=1103 audit(1719906915.058:397): pid=4636 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.063642 systemd[1]: Started session-16.scope. Jul 2 07:55:15.058000 audit[4636]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff378d4c50 a2=3 a3=0 items=0 ppid=1 pid=4636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:15.070120 kernel: audit: type=1006 audit(1719906915.058:398): pid=4636 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 07:55:15.070163 kernel: audit: type=1300 audit(1719906915.058:398): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff378d4c50 a2=3 a3=0 items=0 ppid=1 pid=4636 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:15.070204 kernel: audit: type=1327 audit(1719906915.058:398): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:15.058000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:15.071428 kernel: audit: type=1105 audit(1719906915.067:399): pid=4636 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.067000 audit[4636]: USER_START pid=4636 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.075660 kernel: audit: type=1103 audit(1719906915.068:400): pid=4639 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.068000 audit[4639]: CRED_ACQ pid=4639 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.168401 sshd[4636]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:15.168000 audit[4636]: USER_END pid=4636 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.170861 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:35100.service. 
Jul 2 07:55:15.173196 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:35092.service: Deactivated successfully. Jul 2 07:55:15.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.174138 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:55:15.174170 systemd-logind[1289]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:55:15.175111 systemd-logind[1289]: Removed session 16. Jul 2 07:55:15.177033 kernel: audit: type=1106 audit(1719906915.168:401): pid=4636 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.177084 kernel: audit: type=1130 audit(1719906915.168:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.168000 audit[4636]: CRED_DISP pid=4636 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.210000 audit[4648]: USER_ACCT pid=4648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.210744 sshd[4648]: Accepted publickey for core from 10.0.0.1 port 35100 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:15.211000 audit[4648]: CRED_ACQ pid=4648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.211000 audit[4648]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc82722860 a2=3 a3=0 items=0 ppid=1 pid=4648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:15.211000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:15.211903 sshd[4648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:15.215071 systemd-logind[1289]: New session 17 of user core. Jul 2 07:55:15.215757 systemd[1]: Started session-17.scope. 
Jul 2 07:55:15.218000 audit[4648]: USER_START pid=4648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.220000 audit[4653]: CRED_ACQ pid=4653 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.375283 sshd[4648]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:15.376000 audit[4648]: USER_END pid=4648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.376000 audit[4648]: CRED_DISP pid=4648 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:35106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.377566 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:35106.service. Jul 2 07:55:15.378581 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:35100.service: Deactivated successfully. Jul 2 07:55:15.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:35100 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:15.379863 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:55:15.380435 systemd-logind[1289]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:55:15.381157 systemd-logind[1289]: Removed session 17. Jul 2 07:55:15.418000 audit[4661]: USER_ACCT pid=4661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.419497 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 35106 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:15.419000 audit[4661]: CRED_ACQ pid=4661 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.419000 audit[4661]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2cb16300 a2=3 a3=0 items=0 ppid=1 pid=4661 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:15.419000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:15.420390 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:15.423951 systemd-logind[1289]: New session 18 of user core. 
Jul 2 07:55:15.424688 systemd[1]: Started session-18.scope. Jul 2 07:55:15.428000 audit[4661]: USER_START pid=4661 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:15.429000 audit[4666]: CRED_ACQ pid=4666 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.253000 audit[4679]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:16.253000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff1f1d4160 a2=0 a3=7fff1f1d414c items=0 ppid=2389 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:16.254000 audit[4679]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4679 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:16.254000 audit[4679]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff1f1d4160 a2=0 a3=0 items=0 ppid=2389 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:16.262750 sshd[4661]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:16.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:35110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:16.263787 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:35110.service. 
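The audit PROCTITLE fields in these entries are the process's command line hex-encoded with NUL separators. The value logged for the iptables-restore events decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, and `737368643A20636F7265205B707269765D` decodes to `sshd: core [priv]`. The Go sketch below performs that decoding (hex-decode, then split on NUL); it is a reader-side helper for inspecting this log, not part of the captured system.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into the argv it
// encodes: the kernel records the command line as hex bytes with NUL separators.
func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	// Trim a possible trailing NUL, then split the NUL-separated arguments.
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	for _, h := range []string{
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
		"737368643A20636F7265205B707269765D",
	} {
		argv, err := decodeProctitle(h)
		if err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		fmt.Println(strings.Join(argv, " "))
	}
}
```

Running it prints the two command lines above, which makes the otherwise opaque NETFILTER_CFG and sshd audit records much easier to read.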
Jul 2 07:55:16.264000 audit[4661]: USER_END pid=4661 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.264000 audit[4661]: CRED_DISP pid=4661 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.266000 audit[4682]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:16.266000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff2e0c01c0 a2=0 a3=7fff2e0c01ac items=0 ppid=2389 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:16.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:35106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:16.267085 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:35106.service: Deactivated successfully. Jul 2 07:55:16.268423 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:55:16.268789 systemd-logind[1289]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:55:16.270058 systemd-logind[1289]: Removed session 18. 
Jul 2 07:55:16.266000 audit[4682]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:16.266000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff2e0c01c0 a2=0 a3=0 items=0 ppid=2389 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.266000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:16.315000 audit[4681]: USER_ACCT pid=4681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.316703 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 35110 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:16.316000 audit[4681]: CRED_ACQ pid=4681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.316000 audit[4681]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff19b03640 a2=3 a3=0 items=0 ppid=1 pid=4681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.316000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:16.317765 sshd[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:16.321706 systemd-logind[1289]: New session 19 of user core. Jul 2 07:55:16.322065 systemd[1]: Started session-19.scope. Jul 2 07:55:16.325000 audit[4681]: USER_START pid=4681 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.327000 audit[4687]: CRED_ACQ pid=4687 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.610000 audit[4681]: USER_END pid=4681 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.611000 audit[4681]: CRED_DISP pid=4681 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:35120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:16.610142 sshd[4681]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:16.612315 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:35120.service. Jul 2 07:55:16.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:35110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:16.618319 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:35110.service: Deactivated successfully. Jul 2 07:55:16.619463 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:55:16.620048 systemd-logind[1289]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:55:16.620920 systemd-logind[1289]: Removed session 19. Jul 2 07:55:16.654000 audit[4695]: USER_ACCT pid=4695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.655222 sshd[4695]: Accepted publickey for core from 10.0.0.1 port 35120 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:16.655000 audit[4695]: CRED_ACQ pid=4695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.655000 audit[4695]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc63528c90 a2=3 a3=0 items=0 ppid=1 pid=4695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:16.655000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:16.656516 sshd[4695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:16.661460 systemd[1]: Started session-20.scope. Jul 2 07:55:16.662692 systemd-logind[1289]: New session 20 of user core. Jul 2 07:55:16.666000 audit[4695]: USER_START pid=4695 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.667000 audit[4700]: CRED_ACQ pid=4700 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.773643 sshd[4695]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:16.774000 audit[4695]: USER_END pid=4695 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.774000 audit[4695]: CRED_DISP pid=4695 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:16.776041 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:35120.service: Deactivated successfully. 
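Sessions 16 through 19 above each follow the same lifecycle: a per-connection sshd@... unit starts, PAM opens a session-N.scope, the session closes, and the unit is deactivated. A small Go sketch that pairs the systemd-logind "New session" / "Removed session" lines from a journal excerpt follows; the regular expressions are written against the exact phrasing visible in these entries, and the excerpt in main is just a trimmed sample.

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

var (
	// Phrasing as it appears in the systemd-logind entries above.
	newSession     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	removedSession = regexp.MustCompile(`Removed session (\d+)\.`)
)

// pairSessions reports, for each session ID seen, whether both the open and
// the close appear in the excerpt.
func pairSessions(journal string) map[string]string {
	state := make(map[string]string)
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if m := newSession.FindStringSubmatch(line); m != nil {
			state[m[1]] = "opened (user " + m[2] + ")"
		}
		if m := removedSession.FindStringSubmatch(line); m != nil {
			if _, ok := state[m[1]]; ok {
				state[m[1]] = "opened and closed"
			} else {
				state[m[1]] = "closed (open not in excerpt)"
			}
		}
	}
	return state
}

func main() {
	excerpt := `systemd-logind[1289]: New session 16 of user core.
systemd-logind[1289]: Removed session 16.
systemd-logind[1289]: New session 17 of user core.`
	for id, s := range pairSessions(excerpt) {
		fmt.Printf("session %s: %s\n", id, s)
	}
}
```

Sessions left in the "opened" state at the end of a capture are the ones still active (or whose close fell outside the excerpt), which is a quick way to spot leaked SSH sessions in a dump like this one.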
Jul 2 07:55:16.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:35120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:16.777178 systemd-logind[1289]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:55:16.777187 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:55:16.778216 systemd-logind[1289]: Removed session 20. Jul 2 07:55:16.822615 kubelet[2216]: I0702 07:55:16.820831 2216 topology_manager.go:215] "Topology Admit Handler" podUID="bf1d1360-b6f8-41f1-a284-61eeadec9f29" podNamespace="calico-apiserver" podName="calico-apiserver-5f54cc6b8b-7t52j" Jul 2 07:55:16.932787 kubelet[2216]: I0702 07:55:16.932743 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf1d1360-b6f8-41f1-a284-61eeadec9f29-calico-apiserver-certs\") pod \"calico-apiserver-5f54cc6b8b-7t52j\" (UID: \"bf1d1360-b6f8-41f1-a284-61eeadec9f29\") " pod="calico-apiserver/calico-apiserver-5f54cc6b8b-7t52j" Jul 2 07:55:16.932787 kubelet[2216]: I0702 07:55:16.932788 2216 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2m4\" (UniqueName: \"kubernetes.io/projected/bf1d1360-b6f8-41f1-a284-61eeadec9f29-kube-api-access-dc2m4\") pod \"calico-apiserver-5f54cc6b8b-7t52j\" (UID: \"bf1d1360-b6f8-41f1-a284-61eeadec9f29\") " pod="calico-apiserver/calico-apiserver-5f54cc6b8b-7t52j" Jul 2 07:55:17.034542 kubelet[2216]: E0702 07:55:17.034502 2216 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 07:55:17.035056 kubelet[2216]: E0702 07:55:17.035027 2216 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bf1d1360-b6f8-41f1-a284-61eeadec9f29-calico-apiserver-certs podName:bf1d1360-b6f8-41f1-a284-61eeadec9f29 nodeName:}" failed. No retries permitted until 2024-07-02 07:55:17.534568783 +0000 UTC m=+64.233387459 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/bf1d1360-b6f8-41f1-a284-61eeadec9f29-calico-apiserver-certs") pod "calico-apiserver-5f54cc6b8b-7t52j" (UID: "bf1d1360-b6f8-41f1-a284-61eeadec9f29") : secret "calico-apiserver-certs" not found Jul 2 07:55:17.291000 audit[4713]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:17.291000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffff7527860 a2=0 a3=7ffff752784c items=0 ppid=2389 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:17.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:17.292000 audit[4713]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:17.292000 audit[4713]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffff7527860 a2=0 a3=0 items=0 ppid=2389 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:17.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:17.724891 env[1302]: time="2024-07-02T07:55:17.724846372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f54cc6b8b-7t52j,Uid:bf1d1360-b6f8-41f1-a284-61eeadec9f29,Namespace:calico-apiserver,Attempt:0,}" Jul 2 07:55:17.893859 systemd-networkd[1074]: cali6279ba264a8: Link UP Jul 2 07:55:17.895687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:55:17.895936 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6279ba264a8: link becomes ready Jul 2 07:55:17.895746 systemd-networkd[1074]: cali6279ba264a8: Gained carrier Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.839 [INFO][4715] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0 calico-apiserver-5f54cc6b8b- calico-apiserver bf1d1360-b6f8-41f1-a284-61eeadec9f29 1090 0 2024-07-02 07:55:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f54cc6b8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f54cc6b8b-7t52j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6279ba264a8 [] []}} ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.839 [INFO][4715] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.862 [INFO][4729] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" HandleID="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Workload="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.869 [INFO][4729] ipam_plugin.go 264: Auto assigning IP ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" HandleID="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Workload="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000297650), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5f54cc6b8b-7t52j", "timestamp":"2024-07-02 07:55:17.862622129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.869 [INFO][4729] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.869 [INFO][4729] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.869 [INFO][4729] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.870 [INFO][4729] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.875 [INFO][4729] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.879 [INFO][4729] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.880 [INFO][4729] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.882 [INFO][4729] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.882 [INFO][4729] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.883 [INFO][4729] ipam.go 1685: Creating new handle: k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3 Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.885 [INFO][4729] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.888 [INFO][4729] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.888 [INFO][4729] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.133/26] handle="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" host="localhost" Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.888 [INFO][4729] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:55:17.906727 env[1302]: 2024-07-02 07:55:17.888 [INFO][4729] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" HandleID="k8s-pod-network.54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Workload="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.891 [INFO][4715] k8s.go 386: Populated endpoint ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0", GenerateName:"calico-apiserver-5f54cc6b8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf1d1360-b6f8-41f1-a284-61eeadec9f29", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f54cc6b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f54cc6b8b-7t52j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6279ba264a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.891 [INFO][4715] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.891 [INFO][4715] dataplane_linux.go 68: Setting the host side veth name to cali6279ba264a8 ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.896 [INFO][4715] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.896 [INFO][4715] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0", GenerateName:"calico-apiserver-5f54cc6b8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf1d1360-b6f8-41f1-a284-61eeadec9f29", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 55, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f54cc6b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3", Pod:"calico-apiserver-5f54cc6b8b-7t52j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6279ba264a8", MAC:"d2:8e:c8:e7:f8:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:55:17.907283 env[1302]: 2024-07-02 07:55:17.903 [INFO][4715] k8s.go 500: Wrote updated endpoint to datastore ContainerID="54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3" Namespace="calico-apiserver" Pod="calico-apiserver-5f54cc6b8b-7t52j" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f54cc6b8b--7t52j-eth0" Jul 2 07:55:17.913000 audit[4752]: NETFILTER_CFG table=filter:119 family=2 entries=55 op=nft_register_chain pid=4752 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:55:17.913000 audit[4752]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffe826adfd0 a2=0 a3=7ffe826adfbc items=0 ppid=3357 pid=4752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:17.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:55:18.022687 env[1302]: time="2024-07-02T07:55:18.021746542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:55:18.022687 env[1302]: time="2024-07-02T07:55:18.021786955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:55:18.022687 env[1302]: time="2024-07-02T07:55:18.021796633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:55:18.022687 env[1302]: time="2024-07-02T07:55:18.021926277Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3 pid=4760 runtime=io.containerd.runc.v2 Jul 2 07:55:18.069203 systemd[1]: run-containerd-runc-k8s.io-54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3-runc.F4YFIY.mount: Deactivated successfully. Jul 2 07:55:18.083130 systemd-resolved[1218]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:55:18.104499 env[1302]: time="2024-07-02T07:55:18.104446922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f54cc6b8b-7t52j,Uid:bf1d1360-b6f8-41f1-a284-61eeadec9f29,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3\"" Jul 2 07:55:18.106621 env[1302]: time="2024-07-02T07:55:18.105967586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 07:55:19.817150 systemd-networkd[1074]: cali6279ba264a8: Gained IPv6LL Jul 2 07:55:21.005818 env[1302]: time="2024-07-02T07:55:21.005767302Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:21.008154 env[1302]: time="2024-07-02T07:55:21.008104351Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:21.009693 env[1302]: time="2024-07-02T07:55:21.009645766Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.28.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:21.011224 env[1302]: time="2024-07-02T07:55:21.011193353Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:55:21.011635 env[1302]: time="2024-07-02T07:55:21.011607670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 07:55:21.013104 env[1302]: time="2024-07-02T07:55:21.013069720Z" level=info msg="CreateContainer within sandbox \"54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 07:55:21.026000 env[1302]: time="2024-07-02T07:55:21.025966007Z" level=info msg="CreateContainer within sandbox \"54855fd4e0906145aa7451ca28732a56172b515e90e61f7d97ff8b2eab8ecac3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d245202ab1d3475627bcb6b326b84bcc056810e061437fdbe8fb95d326853b19\"" Jul 2 07:55:21.026495 env[1302]: time="2024-07-02T07:55:21.026463116Z" level=info msg="StartContainer for \"d245202ab1d3475627bcb6b326b84bcc056810e061437fdbe8fb95d326853b19\"" Jul 2 07:55:21.076478 env[1302]: time="2024-07-02T07:55:21.076420722Z" level=info msg="StartContainer for \"d245202ab1d3475627bcb6b326b84bcc056810e061437fdbe8fb95d326853b19\" returns successfully" Jul 2 07:55:21.585662 kubelet[2216]: I0702 
07:55:21.585635 2216 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5f54cc6b8b-7t52j" podStartSLOduration=2.679386376 podCreationTimestamp="2024-07-02 07:55:16 +0000 UTC" firstStartedPulling="2024-07-02 07:55:18.105635564 +0000 UTC m=+64.804454230" lastFinishedPulling="2024-07-02 07:55:21.011825518 +0000 UTC m=+67.710644194" observedRunningTime="2024-07-02 07:55:21.585068322 +0000 UTC m=+68.283886998" watchObservedRunningTime="2024-07-02 07:55:21.58557634 +0000 UTC m=+68.284395016" Jul 2 07:55:21.601635 kernel: kauditd_printk_skb: 66 callbacks suppressed Jul 2 07:55:21.601761 kernel: audit: type=1325 audit(1719906921.599:447): table=filter:120 family=2 entries=34 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.599000 audit[4838]: NETFILTER_CFG table=filter:120 family=2 entries=34 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.599000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffcef518910 a2=0 a3=7ffcef5188fc items=0 ppid=2389 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.608622 kernel: audit: type=1300 audit(1719906921.599:447): arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffcef518910 a2=0 a3=7ffcef5188fc items=0 ppid=2389 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.599000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.611117 kernel: audit: type=1327 audit(1719906921.599:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.613000 audit[4838]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.613000 audit[4838]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcef518910 a2=0 a3=0 items=0 ppid=2389 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.621301 kernel: audit: type=1325 audit(1719906921.613:448): table=nat:121 family=2 entries=20 op=nft_register_rule pid=4838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.621350 kernel: audit: type=1300 audit(1719906921.613:448): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcef518910 a2=0 a3=0 items=0 ppid=2389 pid=4838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.621375 kernel: audit: type=1327 audit(1719906921.613:448): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.619000 
audit[4840]: NETFILTER_CFG table=filter:122 family=2 entries=33 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.625939 kernel: audit: type=1325 audit(1719906921.619:449): table=filter:122 family=2 entries=33 op=nft_register_rule pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.625982 kernel: audit: type=1300 audit(1719906921.619:449): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc1d4098b0 a2=0 a3=7ffc1d40989c items=0 ppid=2389 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.619000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc1d4098b0 a2=0 a3=7ffc1d40989c items=0 ppid=2389 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.619000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.633133 kernel: audit: type=1327 audit(1719906921.619:449): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.621000 audit[4840]: NETFILTER_CFG table=nat:123 family=2 entries=27 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.621000 audit[4840]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc1d4098b0 a2=0 a3=0 items=0 ppid=2389 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:21.642616 kernel: audit: type=1325 audit(1719906921.621:450): table=nat:123 family=2 entries=27 op=nft_register_chain pid=4840 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:21.776537 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:35124.service. Jul 2 07:55:21.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:21.818000 audit[4841]: USER_ACCT pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.820221 sshd[4841]: Accepted publickey for core from 10.0.0.1 port 35124 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:21.819000 audit[4841]: CRED_ACQ pid=4841 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.819000 audit[4841]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc15dad800 a2=3 a3=0 items=0 ppid=1 pid=4841 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:21.819000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:21.821413 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:21.824789 systemd-logind[1289]: New session 21 of user core. Jul 2 07:55:21.825526 systemd[1]: Started session-21.scope. Jul 2 07:55:21.828000 audit[4841]: USER_START pid=4841 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.829000 audit[4844]: CRED_ACQ pid=4844 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.931075 sshd[4841]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:21.930000 audit[4841]: USER_END pid=4841 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.930000 audit[4841]: CRED_DISP pid=4841 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:21.933419 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:35124.service: Deactivated successfully. Jul 2 07:55:21.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:35124 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:21.934582 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:55:21.935079 systemd-logind[1289]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:55:21.936166 systemd-logind[1289]: Removed session 21. 
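The pod_startup_latency_tracker entry above carries four timestamps for calico-apiserver-5f54cc6b8b-7t52j: pod creation (07:55:16), firstStartedPulling (07:55:18.105…), lastFinishedPulling (07:55:21.011…), and observedRunningTime (07:55:21.585…). The Go sketch below parses the wall-clock parts (dropping the monotonic "m=+..." suffixes) and computes the image-pull window (~2.91s) and creation-to-running time (~5.59s); subtracting the pull window gives ~2.68s, consistent with the logged podStartSLOduration=2.679386376, though kubelet's exact bookkeeping differs slightly, so treat this as an approximation rather than its formula.

```go
package main

import (
	"fmt"
	"time"
)

// Fractional seconds in the input are accepted when parsing even though the
// layout does not show them.
const layout = "2006-01-02 15:04:05 -0700 MST"

// Wall-clock parts of the timestamps from the pod_startup_latency_tracker
// entry above (monotonic "m=+..." suffixes dropped).
var (
	created      = mustParse("2024-07-02 07:55:16 +0000 UTC")
	pullStarted  = mustParse("2024-07-02 07:55:18.105635564 +0000 UTC")
	pullFinished = mustParse("2024-07-02 07:55:21.011825518 +0000 UTC")
	running      = mustParse("2024-07-02 07:55:21.585068322 +0000 UTC")
)

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	pullWindow := pullFinished.Sub(pullStarted) // time spent pulling ghcr.io/flatcar/calico/apiserver:v3.28.0
	toRunning := running.Sub(created)           // pod creation to observed running
	fmt.Println("image pull window:   ", pullWindow)
	fmt.Println("creation to running: ", toRunning)
	// Excluding the pull window lands close to the logged podStartSLOduration.
	fmt.Println("excluding pull time: ", toRunning-pullWindow)
}
```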
Jul 2 07:55:22.644000 audit[4857]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=4857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:22.644000 audit[4857]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd64801220 a2=0 a3=7ffd6480120c items=0 ppid=2389 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:22.644000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:22.645000 audit[4857]: NETFILTER_CFG table=nat:125 family=2 entries=106 op=nft_register_chain pid=4857 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:22.645000 audit[4857]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffd64801220 a2=0 a3=7ffd6480120c items=0 ppid=2389 pid=4857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:22.645000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:24.665000 audit[4862]: NETFILTER_CFG table=filter:126 family=2 entries=8 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:24.665000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd839d9880 a2=0 a3=7ffd839d986c items=0 ppid=2389 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:24.665000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:24.667000 audit[4862]: NETFILTER_CFG table=nat:127 family=2 entries=54 op=nft_register_rule pid=4862 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:55:24.667000 audit[4862]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffd839d9880 a2=0 a3=7ffd839d986c items=0 ppid=2389 pid=4862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:24.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:55:26.934002 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:57268.service. Jul 2 07:55:26.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:26.935068 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 2 07:55:26.935180 kernel: audit: type=1130 audit(1719906926.932:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:26.972000 audit[4863]: USER_ACCT pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.974313 sshd[4863]: Accepted publickey for core from 10.0.0.1 port 57268 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:26.976085 sshd[4863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:26.974000 audit[4863]: CRED_ACQ pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.979338 systemd-logind[1289]: New session 22 of user core. Jul 2 07:55:26.980477 systemd[1]: Started session-22.scope. Jul 2 07:55:26.981606 kernel: audit: type=1101 audit(1719906926.972:465): pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.981703 kernel: audit: type=1103 audit(1719906926.974:466): pid=4863 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.974000 audit[4863]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3efc9ba0 a2=3 a3=0 items=0 ppid=1 pid=4863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:26.987994 kernel: audit: type=1006 audit(1719906926.974:467): pid=4863 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 2 07:55:26.988046 kernel: audit: type=1300 audit(1719906926.974:467): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3efc9ba0 a2=3 a3=0 items=0 ppid=1 pid=4863 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:26.988090 kernel: audit: type=1327 audit(1719906926.974:467): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:26.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:26.989295 kernel: audit: type=1105 audit(1719906926.983:468): pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.983000 audit[4863]: USER_START pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.993444 kernel: audit: type=1103 audit(1719906926.984:469): pid=4866 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:26.984000 audit[4866]: CRED_ACQ pid=4866 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:27.080761 sshd[4863]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:27.079000 audit[4863]: USER_END pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:27.083034 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:57268.service: Deactivated successfully. Jul 2 07:55:27.083851 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:55:27.080000 audit[4863]: CRED_DISP pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:27.089776 kernel: audit: type=1106 audit(1719906927.079:470): pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:27.089828 kernel: audit: type=1104 audit(1719906927.080:471): pid=4863 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:27.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:57268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:27.090428 systemd-logind[1289]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:55:27.091089 systemd-logind[1289]: Removed session 22. Jul 2 07:55:28.394581 kubelet[2216]: E0702 07:55:28.394536 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:55:29.394100 kubelet[2216]: E0702 07:55:29.394062 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:55:32.082982 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:57276.service. Jul 2 07:55:32.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:57276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:32.084059 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:55:32.084189 kernel: audit: type=1130 audit(1719906932.081:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:57276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:32.123000 audit[4892]: USER_ACCT pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.125211 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 57276 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:32.127000 audit[4892]: CRED_ACQ pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.129293 sshd[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:32.129658 kernel: audit: type=1101 audit(1719906932.123:474): pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.129689 kernel: audit: type=1103 audit(1719906932.127:475): pid=4892 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.132715 systemd-logind[1289]: New session 23 of user core. Jul 2 07:55:32.133496 systemd[1]: Started session-23.scope. Jul 2 07:55:32.135005 kernel: audit: type=1006 audit(1719906932.127:476): pid=4892 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 2 07:55:32.135045 kernel: audit: type=1300 audit(1719906932.127:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd25a8e100 a2=3 a3=0 items=0 ppid=1 pid=4892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:32.127000 audit[4892]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd25a8e100 a2=3 a3=0 items=0 ppid=1 pid=4892 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:32.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:32.140294 kernel: audit: type=1327 audit(1719906932.127:476): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:32.140398 kernel: audit: type=1105 audit(1719906932.136:477): pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.136000 audit[4892]: USER_START pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.144466 kernel: audit: type=1103 audit(1719906932.137:478): pid=4895 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.137000 audit[4895]: CRED_ACQ pid=4895 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.235948 sshd[4892]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:32.235000 audit[4892]: USER_END pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.238921 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:57276.service: Deactivated successfully. Jul 2 07:55:32.239676 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:55:32.235000 audit[4892]: CRED_DISP pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.243799 systemd-logind[1289]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:55:32.244514 systemd-logind[1289]: Removed session 23. Jul 2 07:55:32.245794 kernel: audit: type=1106 audit(1719906932.235:479): pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.245898 kernel: audit: type=1104 audit(1719906932.235:480): pid=4892 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:32.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:57276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:32.394313 kubelet[2216]: E0702 07:55:32.394202 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:55:37.238480 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:46390.service. Jul 2 07:55:37.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:46390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:37.239619 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:55:37.239677 kernel: audit: type=1130 audit(1719906937.237:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:46390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:37.282000 audit[4906]: USER_ACCT pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.283881 sshd[4906]: Accepted publickey for core from 10.0.0.1 port 46390 ssh2: RSA SHA256:nleLw6rQrPXDNmjSByoAmmq8uYaZ7eXcVx3kSRwUXrw Jul 2 07:55:37.285928 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:55:37.284000 audit[4906]: CRED_ACQ pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.289291 systemd-logind[1289]: New session 24 of user core. Jul 2 07:55:37.290050 systemd[1]: Started session-24.scope. Jul 2 07:55:37.291223 kernel: audit: type=1101 audit(1719906937.282:483): pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.291284 kernel: audit: type=1103 audit(1719906937.284:484): pid=4906 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.291307 kernel: audit: type=1006 audit(1719906937.284:485): pid=4906 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 2 07:55:37.293629 kernel: audit: type=1300 audit(1719906937.284:485): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd902ed8d0 a2=3 a3=0 items=0 ppid=1 pid=4906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:37.284000 audit[4906]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd902ed8d0 a2=3 a3=0 items=0 ppid=1 pid=4906 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:37.284000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:37.298871 kernel: audit: type=1327 audit(1719906937.284:485): proctitle=737368643A20636F7265205B707269765D Jul 2 07:55:37.298914 kernel: audit: type=1105 audit(1719906937.292:486): pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.292000 audit[4906]: USER_START pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.293000 audit[4909]: CRED_ACQ pid=4909 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jul 2 07:55:37.306397 kernel: audit: type=1103 audit(1719906937.293:487): pid=4909 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.389966 sshd[4906]: pam_unix(sshd:session): session closed for user core Jul 2 07:55:37.389000 audit[4906]: USER_END pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.392749 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:46390.service: Deactivated successfully. Jul 2 07:55:37.393488 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:55:37.395956 kernel: audit: type=1106 audit(1719906937.389:488): pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.396009 kernel: audit: type=1104 audit(1719906937.389:489): pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.389000 audit[4906]: CRED_DISP pid=4906 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:55:37.395928 systemd-logind[1289]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:55:37.396675 systemd-logind[1289]: Removed session 24. Jul 2 07:55:37.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.138:22-10.0.0.1:46390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.288683 systemd[1]: run-containerd-runc-k8s.io-1c7850c5a3ca436fddeccac5ece29cc635a05ce4ee70dfd66af81888cfa398a8-runc.TaqXKZ.mount: Deactivated successfully. Jul 2 07:55:38.394866 kubelet[2216]: E0702 07:55:38.394830 2216 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
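The recurring kubelet dns.go:153 "Nameserver limits exceeded" errors in this span indicate that the host's resolv.conf lists more nameservers than kubelet will pass through to pods, so only the first few are applied (1.1.1.1 1.0.0.1 8.8.8.8 in the log) and the rest are omitted. A minimal Python sketch of that truncation, assuming the conventional three-nameserver cap and a hypothetical resolv.conf; the file contents and the helper name are illustrative, not taken from the log:

```python
# Minimal sketch (assumption: the limit of 3 mirrors the classic resolv.conf
# nameserver cap that kubelet applies when building a pod's DNS config).
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    """Collect nameserver entries and keep only the first MAX_NAMESERVERS,
    which is why the log reports an applied line of '1.1.1.1 1.0.0.1 8.8.8.8'
    and notes that the remaining servers were omitted."""
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.strip().startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]

# Hypothetical resolv.conf with more nameservers than the cap allows:
example = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(example))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```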