Feb 9 19:44:38.799469 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:44:38.799486 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:44:38.799496 kernel: BIOS-provided physical RAM map:
Feb 9 19:44:38.799501 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:44:38.799507 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:44:38.799512 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:44:38.799518 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:44:38.799524 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:44:38.799529 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:44:38.799536 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:44:38.799541 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Feb 9 19:44:38.799546 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:44:38.799552 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:44:38.799557 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:44:38.799564 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:44:38.799571 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:44:38.799577 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:44:38.799583 kernel: NX (Execute Disable) protection: active
Feb 9 19:44:38.799588 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 19:44:38.799594 kernel: e820: update [mem 0x9b3f7018-0x9b400c57] usable ==> usable
Feb 9 19:44:38.799600 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 19:44:38.799606 kernel: e820: update [mem 0x9b1aa018-0x9b1e6e57] usable ==> usable
Feb 9 19:44:38.799611 kernel: extended physical RAM map:
Feb 9 19:44:38.799617 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:44:38.799623 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 9 19:44:38.799630 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 9 19:44:38.799635 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 9 19:44:38.799641 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 9 19:44:38.799647 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Feb 9 19:44:38.799653 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Feb 9 19:44:38.799658 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b1aa017] usable
Feb 9 19:44:38.799664 kernel: reserve setup_data: [mem 0x000000009b1aa018-0x000000009b1e6e57] usable
Feb 9 19:44:38.799670 kernel: reserve setup_data: [mem 0x000000009b1e6e58-0x000000009b3f7017] usable
Feb 9 19:44:38.799675 kernel: reserve setup_data: [mem 0x000000009b3f7018-0x000000009b400c57] usable
Feb 9 19:44:38.799681 kernel: reserve setup_data: [mem 0x000000009b400c58-0x000000009c8eefff] usable
Feb 9 19:44:38.799687 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Feb 9 19:44:38.799694 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Feb 9 19:44:38.799699 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 9 19:44:38.799705 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Feb 9 19:44:38.799711 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Feb 9 19:44:38.799719 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 9 19:44:38.799725 kernel: efi: EFI v2.70 by EDK II
Feb 9 19:44:38.799732 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 RNG=0x9cb75018
Feb 9 19:44:38.799745 kernel: random: crng init done
Feb 9 19:44:38.799752 kernel: SMBIOS 2.8 present.
Feb 9 19:44:38.799758 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015
Feb 9 19:44:38.799764 kernel: Hypervisor detected: KVM
Feb 9 19:44:38.799770 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 9 19:44:38.799776 kernel: kvm-clock: cpu 0, msr 34faa001, primary cpu clock
Feb 9 19:44:38.799783 kernel: kvm-clock: using sched offset of 4224923226 cycles
Feb 9 19:44:38.799790 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 9 19:44:38.799796 kernel: tsc: Detected 2794.750 MHz processor
Feb 9 19:44:38.799804 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:44:38.799810 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:44:38.799817 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Feb 9 19:44:38.799823 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:44:38.799830 kernel: Using GB pages for direct mapping
Feb 9 19:44:38.799836 kernel: Secure boot disabled
Feb 9 19:44:38.799842 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:44:38.799849 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 9 19:44:38.799855 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013)
Feb 9 19:44:38.799863 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:44:38.799869 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:44:38.799875 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 9 19:44:38.799882 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:44:38.799888 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:44:38.799895 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 9 19:44:38.799901 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 9 19:44:38.799907 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073]
Feb 9 19:44:38.799914 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38]
Feb 9 19:44:38.799921 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 9 19:44:38.799927 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f]
Feb 9 19:44:38.799934 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037]
Feb 9 19:44:38.799940 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027]
Feb 9 19:44:38.799946 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037]
Feb 9 19:44:38.799952 kernel: No NUMA configuration found
Feb 9 19:44:38.799959 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Feb 9 19:44:38.799965 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Feb 9 19:44:38.799972 kernel: Zone ranges:
Feb 9 19:44:38.799979 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:44:38.799986 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Feb 9 19:44:38.799992 kernel: Normal empty
Feb 9 19:44:38.799998 kernel: Movable zone start for each node
Feb 9 19:44:38.800004 kernel: Early memory node ranges
Feb 9 19:44:38.800011 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:44:38.800017 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 9 19:44:38.800033 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 9 19:44:38.800040 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Feb 9 19:44:38.800047 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Feb 9 19:44:38.800054 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Feb 9 19:44:38.800060 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Feb 9 19:44:38.800066 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:44:38.800073 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:44:38.800079 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 9 19:44:38.800085 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:44:38.800092 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Feb 9 19:44:38.800098 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Feb 9 19:44:38.800106 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Feb 9 19:44:38.800112 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 9 19:44:38.800118 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 9 19:44:38.800125 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:44:38.800131 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 9 19:44:38.800137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 9 19:44:38.800144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:44:38.800150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 9 19:44:38.800156 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 9 19:44:38.800163 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:44:38.800170 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 9 19:44:38.800176 kernel: TSC deadline timer available
Feb 9 19:44:38.800183 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 9 19:44:38.800189 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 9 19:44:38.800195 kernel: kvm-guest: setup PV sched yield
Feb 9 19:44:38.800202 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices
Feb 9 19:44:38.800208 kernel: Booting paravirtualized kernel on KVM
Feb 9 19:44:38.800215 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:44:38.800221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Feb 9 19:44:38.800229 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u524288
Feb 9 19:44:38.800235 kernel: pcpu-alloc: s185624 r8192 d31464 u524288 alloc=1*2097152
Feb 9 19:44:38.800245 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 9 19:44:38.800253 kernel: kvm-guest: setup async PF for cpu 0
Feb 9 19:44:38.800260 kernel: kvm-guest: stealtime: cpu 0, msr 9ae1c0c0
Feb 9 19:44:38.800267 kernel: kvm-guest: PV spinlocks enabled
Feb 9 19:44:38.800273 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:44:38.800280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Feb 9 19:44:38.800286 kernel: Policy zone: DMA32
Feb 9 19:44:38.800294 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:44:38.800301 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:44:38.800309 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:44:38.800316 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 19:44:38.800323 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:44:38.800330 kernel: Memory: 2400436K/2567000K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 166304K reserved, 0K cma-reserved)
Feb 9 19:44:38.800337 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 9 19:44:38.800345 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:44:38.800351 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:44:38.800358 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:44:38.800365 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:44:38.800372 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 9 19:44:38.800379 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:44:38.800386 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:44:38.800392 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:44:38.800399 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 9 19:44:38.800407 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 9 19:44:38.800413 kernel: Console: colour dummy device 80x25
Feb 9 19:44:38.800420 kernel: printk: console [ttyS0] enabled
Feb 9 19:44:38.800427 kernel: ACPI: Core revision 20210730
Feb 9 19:44:38.800434 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 9 19:44:38.800441 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:44:38.800447 kernel: x2apic enabled
Feb 9 19:44:38.800454 kernel: Switched APIC routing to physical x2apic.
Feb 9 19:44:38.800461 kernel: kvm-guest: setup PV IPIs
Feb 9 19:44:38.800468 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 9 19:44:38.800475 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 9 19:44:38.800482 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 9 19:44:38.800489 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 9 19:44:38.800495 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 9 19:44:38.800502 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 9 19:44:38.800509 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:44:38.800528 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:44:38.800535 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:44:38.800543 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:44:38.800550 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 9 19:44:38.800556 kernel: RETBleed: Mitigation: untrained return thunk
Feb 9 19:44:38.800563 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 9 19:44:38.800570 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Feb 9 19:44:38.800577 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:44:38.800584 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:44:38.800591 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:44:38.800597 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:44:38.800606 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 9 19:44:38.800612 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:44:38.800619 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:44:38.800626 kernel: LSM: Security Framework initializing
Feb 9 19:44:38.800632 kernel: SELinux: Initializing.
Feb 9 19:44:38.800639 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:44:38.800646 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 19:44:38.800653 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 9 19:44:38.800660 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 9 19:44:38.800668 kernel: ... version: 0
Feb 9 19:44:38.800674 kernel: ... bit width: 48
Feb 9 19:44:38.800681 kernel: ... generic registers: 6
Feb 9 19:44:38.800688 kernel: ... value mask: 0000ffffffffffff
Feb 9 19:44:38.800695 kernel: ... max period: 00007fffffffffff
Feb 9 19:44:38.800701 kernel: ... fixed-purpose events: 0
Feb 9 19:44:38.800708 kernel: ... event mask: 000000000000003f
Feb 9 19:44:38.800715 kernel: signal: max sigframe size: 1776
Feb 9 19:44:38.800722 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:44:38.800730 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:44:38.800743 kernel: x86: Booting SMP configuration:
Feb 9 19:44:38.800750 kernel: .... node #0, CPUs: #1
Feb 9 19:44:38.800757 kernel: kvm-clock: cpu 1, msr 34faa041, secondary cpu clock
Feb 9 19:44:38.800763 kernel: kvm-guest: setup async PF for cpu 1
Feb 9 19:44:38.800770 kernel: kvm-guest: stealtime: cpu 1, msr 9ae9c0c0
Feb 9 19:44:38.800777 kernel: #2
Feb 9 19:44:38.800784 kernel: kvm-clock: cpu 2, msr 34faa081, secondary cpu clock
Feb 9 19:44:38.800790 kernel: kvm-guest: setup async PF for cpu 2
Feb 9 19:44:38.800798 kernel: kvm-guest: stealtime: cpu 2, msr 9af1c0c0
Feb 9 19:44:38.800805 kernel: #3
Feb 9 19:44:38.800811 kernel: kvm-clock: cpu 3, msr 34faa0c1, secondary cpu clock
Feb 9 19:44:38.800818 kernel: kvm-guest: setup async PF for cpu 3
Feb 9 19:44:38.800825 kernel: kvm-guest: stealtime: cpu 3, msr 9af9c0c0
Feb 9 19:44:38.800831 kernel: smp: Brought up 1 node, 4 CPUs
Feb 9 19:44:38.800838 kernel: smpboot: Max logical packages: 1
Feb 9 19:44:38.800845 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 9 19:44:38.800851 kernel: devtmpfs: initialized
Feb 9 19:44:38.800858 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:44:38.800866 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 9 19:44:38.800873 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 9 19:44:38.800880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Feb 9 19:44:38.800887 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 9 19:44:38.800893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 9 19:44:38.800900 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:44:38.800907 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 9 19:44:38.800914 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:44:38.800921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:44:38.800928 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:44:38.800935 kernel: audit: type=2000 audit(1707507877.348:1): state=initialized audit_enabled=0 res=1
Feb 9 19:44:38.800942 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:44:38.800948 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:44:38.800955 kernel: cpuidle: using governor menu
Feb 9 19:44:38.800962 kernel: ACPI: bus type PCI registered
Feb 9 19:44:38.800968 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:44:38.800975 kernel: dca service started, version 1.12.1
Feb 9 19:44:38.800983 kernel: PCI: Using configuration type 1 for base access
Feb 9 19:44:38.800990 kernel: PCI: Using configuration type 1 for extended access
Feb 9 19:44:38.800996 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:44:38.801003 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:44:38.801010 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:44:38.801016 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:44:38.801031 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:44:38.801038 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:44:38.801044 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:44:38.801053 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:44:38.801060 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:44:38.801066 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:44:38.801073 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:44:38.801080 kernel: ACPI: Interpreter enabled
Feb 9 19:44:38.801086 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 9 19:44:38.801093 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:44:38.801100 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:44:38.801107 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 9 19:44:38.801114 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 9 19:44:38.801227 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 19:44:38.801238 kernel: acpiphp: Slot [3] registered
Feb 9 19:44:38.801246 kernel: acpiphp: Slot [4] registered
Feb 9 19:44:38.801252 kernel: acpiphp: Slot [5] registered
Feb 9 19:44:38.801259 kernel: acpiphp: Slot [6] registered
Feb 9 19:44:38.801265 kernel: acpiphp: Slot [7] registered
Feb 9 19:44:38.801272 kernel: acpiphp: Slot [8] registered
Feb 9 19:44:38.801278 kernel: acpiphp: Slot [9] registered
Feb 9 19:44:38.801287 kernel: acpiphp: Slot [10] registered
Feb 9 19:44:38.801293 kernel: acpiphp: Slot [11] registered
Feb 9 19:44:38.801300 kernel: acpiphp: Slot [12] registered
Feb 9 19:44:38.801307 kernel: acpiphp: Slot [13] registered
Feb 9 19:44:38.801313 kernel: acpiphp: Slot [14] registered
Feb 9 19:44:38.801320 kernel: acpiphp: Slot [15] registered
Feb 9 19:44:38.801327 kernel: acpiphp: Slot [16] registered
Feb 9 19:44:38.801333 kernel: acpiphp: Slot [17] registered
Feb 9 19:44:38.801340 kernel: acpiphp: Slot [18] registered
Feb 9 19:44:38.801347 kernel: acpiphp: Slot [19] registered
Feb 9 19:44:38.801354 kernel: acpiphp: Slot [20] registered
Feb 9 19:44:38.801361 kernel: acpiphp: Slot [21] registered
Feb 9 19:44:38.801367 kernel: acpiphp: Slot [22] registered
Feb 9 19:44:38.801374 kernel: acpiphp: Slot [23] registered
Feb 9 19:44:38.801381 kernel: acpiphp: Slot [24] registered
Feb 9 19:44:38.801387 kernel: acpiphp: Slot [25] registered
Feb 9 19:44:38.801394 kernel: acpiphp: Slot [26] registered
Feb 9 19:44:38.801400 kernel: acpiphp: Slot [27] registered
Feb 9 19:44:38.801407 kernel: acpiphp: Slot [28] registered
Feb 9 19:44:38.801415 kernel: acpiphp: Slot [29] registered
Feb 9 19:44:38.801421 kernel: acpiphp: Slot [30] registered
Feb 9 19:44:38.801428 kernel: acpiphp: Slot [31] registered
Feb 9 19:44:38.801434 kernel: PCI host bridge to bus 0000:00
Feb 9 19:44:38.801583 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 9 19:44:38.801650 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 9 19:44:38.801711 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 9 19:44:38.801781 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window]
Feb 9 19:44:38.801845 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window]
Feb 9 19:44:38.801907 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 9 19:44:38.801988 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 9 19:44:38.802084 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 9 19:44:38.802168 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 9 19:44:38.802238 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf]
Feb 9 19:44:38.802315 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 9 19:44:38.802384 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 9 19:44:38.802452 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 9 19:44:38.802519 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 9 19:44:38.802593 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 9 19:44:38.802662 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 9 19:44:38.802731 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Feb 9 19:44:38.802815 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000
Feb 9 19:44:38.802882 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 9 19:44:38.802949 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff]
Feb 9 19:44:38.803017 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 9 19:44:38.803118 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb
Feb 9 19:44:38.803184 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 9 19:44:38.803265 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00
Feb 9 19:44:38.803333 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf]
Feb 9 19:44:38.803403 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 9 19:44:38.803471 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Feb 9 19:44:38.803544 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 9 19:44:38.803614 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 9 19:44:38.803681 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 9 19:44:38.803763 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Feb 9 19:44:38.803840 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000
Feb 9 19:44:38.803910 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 9 19:44:38.803993 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff]
Feb 9 19:44:38.804074 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Feb 9 19:44:38.804143 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 9 19:44:38.804153 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 9 19:44:38.804162 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 9 19:44:38.804169 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 9 19:44:38.804176 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 9 19:44:38.804183 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 9 19:44:38.804190 kernel: iommu: Default domain type: Translated
Feb 9 19:44:38.804197 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:44:38.804265 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 9 19:44:38.804332 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 9 19:44:38.804398 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 9 19:44:38.804410 kernel: vgaarb: loaded
Feb 9 19:44:38.804417 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:44:38.804424 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:44:38.804431 kernel: PTP clock support registered
Feb 9 19:44:38.804437 kernel: Registered efivars operations
Feb 9 19:44:38.804444 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:44:38.804451 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 9 19:44:38.804457 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 9 19:44:38.804464 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Feb 9 19:44:38.804472 kernel: e820: reserve RAM buffer [mem 0x9b1aa018-0x9bffffff]
Feb 9 19:44:38.804479 kernel: e820: reserve RAM buffer [mem 0x9b3f7018-0x9bffffff]
Feb 9 19:44:38.804485 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Feb 9 19:44:38.804492 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Feb 9 19:44:38.804499 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 9 19:44:38.804505 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 9 19:44:38.804512 kernel: clocksource: Switched to clocksource kvm-clock
Feb 9 19:44:38.804519 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:44:38.804526 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:44:38.804533 kernel: pnp: PnP ACPI init
Feb 9 19:44:38.804609 kernel: pnp 00:02: [dma 2]
Feb 9 19:44:38.804619 kernel: pnp: PnP ACPI: found 6 devices
Feb 9 19:44:38.804626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:44:38.804633 kernel: NET: Registered PF_INET protocol family
Feb 9 19:44:38.804640 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:44:38.804646 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 19:44:38.804653 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:44:38.804662 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 19:44:38.804669 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 19:44:38.804676 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 19:44:38.804683 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:44:38.804690 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 19:44:38.804696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:44:38.804703 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:44:38.804784 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 9 19:44:38.804866 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 9 19:44:38.804928 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 9 19:44:38.804988 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 9 19:44:38.805074 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 9 19:44:38.805135 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window]
Feb 9 19:44:38.805200 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window]
Feb 9 19:44:38.805273 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 9 19:44:38.805342 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 9 19:44:38.805412 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Feb 9 19:44:38.805421 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:44:38.805428 kernel: Initialise system trusted keyrings
Feb 9 19:44:38.805435 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 19:44:38.805442 kernel: Key type asymmetric registered
Feb 9 19:44:38.805449 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:44:38.805457 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:44:38.805464 kernel: io scheduler mq-deadline registered
Feb 9 19:44:38.805471 kernel: io scheduler kyber registered
Feb 9 19:44:38.805479 kernel: io scheduler bfq registered
Feb 9 19:44:38.805486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:44:38.805494 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 9 19:44:38.805501 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 9 19:44:38.805508 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 9 19:44:38.805515 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:44:38.805522 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:44:38.805529 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 9 19:44:38.805537 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 9 19:44:38.805545 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 9 19:44:38.805614 kernel: rtc_cmos 00:05: RTC can wake from S4
Feb 9 19:44:38.805626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 9 19:44:38.805687 kernel: rtc_cmos 00:05: registered as rtc0
Feb 9 19:44:38.805759 kernel: rtc_cmos 00:05: setting system clock to 2024-02-09T19:44:38 UTC (1707507878)
Feb 9 19:44:38.805823 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 9 19:44:38.805832 kernel: efifb: probing for efifb
Feb 9 19:44:38.805840 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 9 19:44:38.805847 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 9 19:44:38.805854 kernel: efifb: scrolling: redraw
Feb 9 19:44:38.805861 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:44:38.805868 kernel: Console: switching to colour frame buffer device 160x50
Feb 9 19:44:38.805875 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:44:38.805884 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:44:38.805891 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:44:38.805898 kernel: Segment Routing with IPv6
Feb 9 19:44:38.805905 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:44:38.805912 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:44:38.805919 kernel: Key type dns_resolver registered
Feb 9 19:44:38.805926 kernel: IPI shorthand broadcast: enabled
Feb 9 19:44:38.805934 kernel: sched_clock: Marking stable (353427948, 90476804)->(464353313, -20448561)
Feb 9 19:44:38.805941 kernel: registered taskstats version 1
Feb 9 19:44:38.805948 kernel: Loading compiled-in X.509 certificates
Feb 9 19:44:38.805956 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:44:38.805963 kernel: Key type .fscrypt registered
Feb 9 19:44:38.805970 kernel: Key type fscrypt-provisioning registered
Feb 9 19:44:38.805977 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:44:38.805984 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:44:38.805991 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:44:38.805998 kernel: ima: No architecture policies found Feb 9 19:44:38.806005 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:44:38.806013 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:44:38.806030 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:44:38.806038 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:44:38.806045 kernel: Run /init as init process Feb 9 19:44:38.806053 kernel: with arguments: Feb 9 19:44:38.806060 kernel: /init Feb 9 19:44:38.806067 kernel: with environment: Feb 9 19:44:38.806073 kernel: HOME=/ Feb 9 19:44:38.806080 kernel: TERM=linux Feb 9 19:44:38.806087 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:44:38.806097 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:44:38.806107 systemd[1]: Detected virtualization kvm. Feb 9 19:44:38.806114 systemd[1]: Detected architecture x86-64. Feb 9 19:44:38.806123 systemd[1]: Running in initrd. Feb 9 19:44:38.806130 systemd[1]: No hostname configured, using default hostname. Feb 9 19:44:38.806137 systemd[1]: Hostname set to . Feb 9 19:44:38.806145 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:44:38.806154 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:44:38.806161 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:44:38.806169 systemd[1]: Reached target cryptsetup.target. Feb 9 19:44:38.806176 systemd[1]: Reached target paths.target. Feb 9 19:44:38.806184 systemd[1]: Reached target slices.target. Feb 9 19:44:38.806191 systemd[1]: Reached target swap.target. 
Feb 9 19:44:38.806199 systemd[1]: Reached target timers.target. Feb 9 19:44:38.806208 systemd[1]: Listening on iscsid.socket. Feb 9 19:44:38.806215 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:44:38.806223 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:44:38.806230 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:44:38.806238 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:44:38.806246 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:44:38.806253 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:44:38.806261 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:44:38.806269 systemd[1]: Reached target sockets.target. Feb 9 19:44:38.806277 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:44:38.806285 systemd[1]: Finished network-cleanup.service. Feb 9 19:44:38.806293 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:44:38.806300 systemd[1]: Starting systemd-journald.service... Feb 9 19:44:38.806308 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:44:38.806315 systemd[1]: Starting systemd-resolved.service... Feb 9 19:44:38.806323 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:44:38.806330 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:44:38.806338 kernel: audit: type=1130 audit(1707507878.801:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.806348 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:44:38.806355 kernel: audit: type=1130 audit(1707507878.804:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:38.806365 systemd-journald[196]: Journal started Feb 9 19:44:38.806401 systemd-journald[196]: Runtime Journal (/run/log/journal/ad34ecf64f224bc19229ebbf6856cd39) is 6.0M, max 48.4M, 42.4M free. Feb 9 19:44:38.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.807869 systemd-modules-load[197]: Inserted module 'overlay' Feb 9 19:44:38.811206 systemd[1]: Started systemd-journald.service. Feb 9 19:44:38.811222 kernel: audit: type=1130 audit(1707507878.808:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.808525 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:44:38.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.814929 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:44:38.816672 kernel: audit: type=1130 audit(1707507878.811:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.815940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
Feb 9 19:44:38.821048 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:44:38.824294 kernel: audit: type=1130 audit(1707507878.821:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.829391 kernel: audit: type=1130 audit(1707507878.826:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.826236 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:44:38.828165 systemd-resolved[198]: Positive Trust Anchors: Feb 9 19:44:38.828173 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:44:38.828198 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:44:38.838191 kernel: audit: type=1130 audit(1707507878.832:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:44:38.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.828859 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:44:38.830275 systemd-resolved[198]: Defaulting to hostname 'linux'. Feb 9 19:44:38.831670 systemd[1]: Started systemd-resolved.service. Feb 9 19:44:38.832345 systemd[1]: Reached target nss-lookup.target. Feb 9 19:44:38.841192 dracut-cmdline[216]: dracut-dracut-053 Feb 9 19:44:38.843042 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:44:38.843058 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:44:38.846633 systemd-modules-load[197]: Inserted module 'br_netfilter' Feb 9 19:44:38.847348 kernel: Bridge firewalling registered Feb 9 19:44:38.862038 kernel: SCSI subsystem initialized Feb 9 19:44:38.872305 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:44:38.872328 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:44:38.873311 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:44:38.876141 systemd-modules-load[197]: Inserted module 'dm_multipath' Feb 9 19:44:38.877256 systemd[1]: Finished systemd-modules-load.service. 
Feb 9 19:44:38.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.878265 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:44:38.881250 kernel: audit: type=1130 audit(1707507878.877:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.886863 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:44:38.890390 kernel: audit: type=1130 audit(1707507878.887:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.905036 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:44:38.916044 kernel: iscsi: registered transport (tcp) Feb 9 19:44:38.938056 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:44:38.938109 kernel: QLogic iSCSI HBA Driver Feb 9 19:44:38.971443 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:44:38.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:38.972954 systemd[1]: Starting dracut-pre-udev.service... 
Feb 9 19:44:39.019065 kernel: raid6: avx2x4 gen() 28773 MB/s Feb 9 19:44:39.036043 kernel: raid6: avx2x4 xor() 7371 MB/s Feb 9 19:44:39.053047 kernel: raid6: avx2x2 gen() 31047 MB/s Feb 9 19:44:39.070044 kernel: raid6: avx2x2 xor() 19277 MB/s Feb 9 19:44:39.087047 kernel: raid6: avx2x1 gen() 26030 MB/s Feb 9 19:44:39.104045 kernel: raid6: avx2x1 xor() 15287 MB/s Feb 9 19:44:39.121045 kernel: raid6: sse2x4 gen() 14750 MB/s Feb 9 19:44:39.138047 kernel: raid6: sse2x4 xor() 7226 MB/s Feb 9 19:44:39.155046 kernel: raid6: sse2x2 gen() 14488 MB/s Feb 9 19:44:39.172054 kernel: raid6: sse2x2 xor() 8653 MB/s Feb 9 19:44:39.189059 kernel: raid6: sse2x1 gen() 10171 MB/s Feb 9 19:44:39.206128 kernel: raid6: sse2x1 xor() 6321 MB/s Feb 9 19:44:39.206163 kernel: raid6: using algorithm avx2x2 gen() 31047 MB/s Feb 9 19:44:39.206177 kernel: raid6: .... xor() 19277 MB/s, rmw enabled Feb 9 19:44:39.207217 kernel: raid6: using avx2x2 recovery algorithm Feb 9 19:44:39.222054 kernel: xor: automatically using best checksumming function avx Feb 9 19:44:39.348056 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:44:39.358140 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:44:39.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:39.359000 audit: BPF prog-id=7 op=LOAD Feb 9 19:44:39.359000 audit: BPF prog-id=8 op=LOAD Feb 9 19:44:39.360156 systemd[1]: Starting systemd-udevd.service... Feb 9 19:44:39.371675 systemd-udevd[401]: Using default interface naming scheme 'v252'. Feb 9 19:44:39.375387 systemd[1]: Started systemd-udevd.service. Feb 9 19:44:39.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:39.378370 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:44:39.388471 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation Feb 9 19:44:39.417756 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:44:39.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:39.420583 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:44:39.454472 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:44:39.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:39.486170 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 9 19:44:39.492186 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 9 19:44:39.492241 kernel: GPT:9289727 != 19775487 Feb 9 19:44:39.492257 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 9 19:44:39.492275 kernel: GPT:9289727 != 19775487 Feb 9 19:44:39.492283 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 9 19:44:39.492293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:44:39.494065 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:44:39.501048 kernel: libata version 3.00 loaded. Feb 9 19:44:39.509051 kernel: ata_piix 0000:00:01.1: version 2.13 Feb 9 19:44:39.522055 kernel: scsi host0: ata_piix Feb 9 19:44:39.525164 kernel: scsi host1: ata_piix Feb 9 19:44:39.525337 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Feb 9 19:44:39.525355 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Feb 9 19:44:39.528457 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 9 19:44:39.528485 kernel: AES CTR mode by8 optimization enabled Feb 9 19:44:39.539047 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Feb 9 19:44:39.541004 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:44:39.543603 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:44:39.557999 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:44:39.564673 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:44:39.571470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:44:39.574322 systemd[1]: Starting disk-uuid.service... Feb 9 19:44:39.580342 disk-uuid[526]: Primary Header is updated. Feb 9 19:44:39.580342 disk-uuid[526]: Secondary Entries is updated. Feb 9 19:44:39.580342 disk-uuid[526]: Secondary Header is updated. Feb 9 19:44:39.583444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:44:39.683053 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 9 19:44:39.685064 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 9 19:44:39.717351 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 9 19:44:39.717681 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:44:39.735049 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Feb 9 19:44:40.590066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 9 19:44:40.590425 disk-uuid[527]: The operation has completed successfully. Feb 9 19:44:40.611552 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:44:40.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:44:40.611631 systemd[1]: Finished disk-uuid.service. Feb 9 19:44:40.618620 systemd[1]: Starting verity-setup.service... Feb 9 19:44:40.630038 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 9 19:44:40.647207 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:44:40.648747 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:44:40.652382 systemd[1]: Finished verity-setup.service. Feb 9 19:44:40.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.704047 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:44:40.704303 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:44:40.704927 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:44:40.705458 systemd[1]: Starting ignition-setup.service... Feb 9 19:44:40.707252 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:44:40.713256 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:44:40.713294 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:44:40.713308 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:44:40.720985 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:44:40.727899 systemd[1]: Finished ignition-setup.service. Feb 9 19:44:40.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.728850 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:44:40.762047 ignition[636]: Ignition 2.14.0 Feb 9 19:44:40.762089 ignition[636]: Stage: fetch-offline Feb 9 19:44:40.762131 ignition[636]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:44:40.762139 ignition[636]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:44:40.762229 ignition[636]: parsed url from cmdline: "" Feb 9 19:44:40.762232 ignition[636]: no config URL provided Feb 9 19:44:40.762237 ignition[636]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:44:40.762244 ignition[636]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:44:40.762263 ignition[636]: op(1): [started] loading QEMU firmware config module Feb 9 19:44:40.762267 ignition[636]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 9 19:44:40.766631 ignition[636]: op(1): [finished] loading QEMU firmware config module Feb 9 19:44:40.772444 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:44:40.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.774000 audit: BPF prog-id=9 op=LOAD Feb 9 19:44:40.775180 systemd[1]: Starting systemd-networkd.service... Feb 9 19:44:40.830883 ignition[636]: parsing config with SHA512: 1ed30a2574aebbbaf834b82b47e683a7a80136bd7e3b93bbb43ce71d4948062d507edcf4fd9d191d52ef9ef4cccd2a2be1393c74767e00eed82420d342078c4e Feb 9 19:44:40.848602 systemd-networkd[721]: lo: Link UP Feb 9 19:44:40.848613 systemd-networkd[721]: lo: Gained carrier Feb 9 19:44:40.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.849321 systemd-networkd[721]: Enumeration completed Feb 9 19:44:40.849426 systemd[1]: Started systemd-networkd.service. 
Feb 9 19:44:40.849516 systemd-networkd[721]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:44:40.850258 systemd-networkd[721]: eth0: Link UP Feb 9 19:44:40.850261 systemd-networkd[721]: eth0: Gained carrier Feb 9 19:44:40.850400 systemd[1]: Reached target network.target. Feb 9 19:44:40.852153 systemd[1]: Starting iscsiuio.service... Feb 9 19:44:40.856142 systemd[1]: Started iscsiuio.service. Feb 9 19:44:40.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.857441 systemd[1]: Starting iscsid.service... Feb 9 19:44:40.860148 iscsid[726]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:44:40.860148 iscsid[726]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 19:44:40.860148 iscsid[726]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:44:40.860148 iscsid[726]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:44:40.860148 iscsid[726]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:44:40.860148 iscsid[726]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:44:40.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:40.861084 systemd-networkd[721]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:44:40.861318 systemd[1]: Started iscsid.service. Feb 9 19:44:40.866118 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:44:40.873803 unknown[636]: fetched base config from "system" Feb 9 19:44:40.874254 unknown[636]: fetched user config from "qemu" Feb 9 19:44:40.875685 ignition[636]: fetch-offline: fetch-offline passed Feb 9 19:44:40.876092 ignition[636]: Ignition finished successfully Feb 9 19:44:40.877498 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:44:40.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.878833 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:44:40.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.880167 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:44:40.881282 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:44:40.882468 systemd[1]: Reached target remote-fs.target. Feb 9 19:44:40.884302 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:44:40.885370 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 9 19:44:40.887036 systemd[1]: Starting ignition-kargs.service... Feb 9 19:44:40.891318 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:44:40.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:40.895229 ignition[736]: Ignition 2.14.0 Feb 9 19:44:40.895239 ignition[736]: Stage: kargs Feb 9 19:44:40.895329 ignition[736]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:44:40.895338 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:44:40.896511 ignition[736]: kargs: kargs passed Feb 9 19:44:40.896550 ignition[736]: Ignition finished successfully Feb 9 19:44:40.899880 systemd[1]: Finished ignition-kargs.service. Feb 9 19:44:40.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.901209 systemd[1]: Starting ignition-disks.service... Feb 9 19:44:40.909205 ignition[746]: Ignition 2.14.0 Feb 9 19:44:40.909213 ignition[746]: Stage: disks Feb 9 19:44:40.909294 ignition[746]: no configs at "/usr/lib/ignition/base.d" Feb 9 19:44:40.909303 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:44:40.910484 ignition[746]: disks: disks passed Feb 9 19:44:40.910520 ignition[746]: Ignition finished successfully Feb 9 19:44:40.913954 systemd[1]: Finished ignition-disks.service. Feb 9 19:44:40.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.914230 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:44:40.914456 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:44:40.914685 systemd[1]: Reached target local-fs.target. Feb 9 19:44:40.914910 systemd[1]: Reached target sysinit.target. Feb 9 19:44:40.915272 systemd[1]: Reached target basic.target. Feb 9 19:44:40.919992 systemd[1]: Starting systemd-fsck-root.service... 
Feb 9 19:44:40.931439 systemd-fsck[754]: ROOT: clean, 602/553520 files, 56014/553472 blocks Feb 9 19:44:40.935668 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:44:40.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.937491 systemd[1]: Mounting sysroot.mount... Feb 9 19:44:40.945040 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:44:40.945404 systemd[1]: Mounted sysroot.mount. Feb 9 19:44:40.946444 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:44:40.948405 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:44:40.949785 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Feb 9 19:44:40.949834 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:44:40.950887 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:44:40.954243 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:44:40.956047 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:44:40.959923 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:44:40.964007 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:44:40.967556 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:44:40.970573 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:44:40.995348 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:44:40.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:40.997939 systemd[1]: Starting ignition-mount.service... 
Feb 9 19:44:40.999269 systemd[1]: Starting sysroot-boot.service... Feb 9 19:44:41.003649 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Feb 9 19:44:41.011854 ignition[806]: INFO : Ignition 2.14.0 Feb 9 19:44:41.011854 ignition[806]: INFO : Stage: mount Feb 9 19:44:41.013237 ignition[806]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 9 19:44:41.013237 ignition[806]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 9 19:44:41.016583 ignition[806]: INFO : mount: mount passed Feb 9 19:44:41.017357 ignition[806]: INFO : Ignition finished successfully Feb 9 19:44:41.018910 systemd[1]: Finished sysroot-boot.service. Feb 9 19:44:41.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:41.020446 systemd[1]: Finished ignition-mount.service. Feb 9 19:44:41.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:41.656912 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:44:41.662045 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Feb 9 19:44:41.662096 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:44:41.663516 kernel: BTRFS info (device vda6): using free space tree Feb 9 19:44:41.663527 kernel: BTRFS info (device vda6): has skinny extents Feb 9 19:44:41.666575 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:44:41.667751 systemd[1]: Starting ignition-files.service... 
Feb 9 19:44:41.682138 ignition[835]: INFO : Ignition 2.14.0
Feb 9 19:44:41.682138 ignition[835]: INFO : Stage: files
Feb 9 19:44:41.683398 ignition[835]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:44:41.683398 ignition[835]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:44:41.683398 ignition[835]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:44:41.686068 ignition[835]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:44:41.686068 ignition[835]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:44:41.688003 ignition[835]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:44:41.688960 ignition[835]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:44:41.690403 unknown[835]: wrote ssh authorized keys file for user: core
Feb 9 19:44:41.691204 ignition[835]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:44:41.692765 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:44:41.694136 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:44:41.725389 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:44:41.830137 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:44:41.831690 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:44:41.831690 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:44:42.129221 systemd-networkd[721]: eth0: Gained IPv6LL
Feb 9 19:44:42.157053 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:44:42.268202 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:44:42.268202 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:44:42.271600 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:44:42.271600 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:44:42.535617 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:44:42.611399 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:44:42.611399 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:44:42.614992 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:44:42.614992 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:44:42.617381 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:44:42.617381 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:44:42.737406 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:44:42.897106 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:44:42.897106 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:44:42.901441 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:44:42.901441 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:44:42.943859 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:44:43.118128 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628
Feb 9 19:44:43.120236 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:44:43.120236 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:44:43.120236 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:44:43.173313 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK
Feb 9 19:44:43.659280 ignition[835]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:44:43.659280 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:44:43.662410 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:44:43.662410 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:44:43.662410 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:44:43.665824 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:44:43.666953 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:44:43.668112 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:44:43.669249 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:44:43.670404 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:44:43.671539 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:44:43.672683 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:44:43.673882 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:44:43.675072 ignition[835]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:44:43.676226 ignition[835]: INFO : files: op(10): [started] processing unit "prepare-helm.service"
Feb 9 19:44:43.677117 ignition[835]: INFO : files: op(10): op(11): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:44:43.678440 ignition[835]: INFO : files: op(10): op(11): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:44:43.678440 ignition[835]: INFO : files: op(10): [finished] processing unit "prepare-helm.service"
Feb 9 19:44:43.678440 ignition[835]: INFO : files: op(12): [started] processing unit "coreos-metadata.service"
Feb 9 19:44:43.681347 ignition[835]: INFO : files: op(12): op(13): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:44:43.682622 ignition[835]: INFO : files: op(12): op(13): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 9 19:44:43.682622 ignition[835]: INFO : files: op(12): [finished] processing unit "coreos-metadata.service"
Feb 9 19:44:43.682622 ignition[835]: INFO : files: op(14): [started] processing unit "containerd.service"
Feb 9 19:44:43.682622 ignition[835]: INFO : files: op(14): op(15): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:44:43.686969 ignition[835]: INFO : files: op(14): op(15): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 9 19:44:43.686969 ignition[835]: INFO : files: op(14): [finished] processing unit "containerd.service"
Feb 9 19:44:43.686969 ignition[835]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:44:43.686969 ignition[835]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:44:43.691509 ignition[835]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:44:43.691509 ignition[835]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:44:43.691509 ignition[835]: INFO : files: op(18): [started] processing unit "prepare-critools.service"
Feb 9 19:44:43.691509 ignition[835]: INFO : files: op(18): op(19): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(18): op(19): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(18): [finished] processing unit "prepare-critools.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(1b): [started] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:44:43.695847 ignition[835]: INFO : files: op(1b): op(1c): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:44:43.716522 ignition[835]: INFO : files: op(1b): op(1c): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: op(1b): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: op(1e): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:44:43.717795 ignition[835]: INFO : files: files passed
Feb 9 19:44:43.717795 ignition[835]: INFO : Ignition finished successfully
Feb 9 19:44:43.726575 systemd[1]: Finished ignition-files.service.
Feb 9 19:44:43.730263 kernel: kauditd_printk_skb: 25 callbacks suppressed
Feb 9 19:44:43.730282 kernel: audit: type=1130 audit(1707507883.726:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.730285 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:44:43.730597 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:44:43.731132 systemd[1]: Starting ignition-quench.service...
Feb 9 19:44:43.733610 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:44:43.738728 kernel: audit: type=1130 audit(1707507883.733:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.738744 kernel: audit: type=1131 audit(1707507883.733:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.733685 systemd[1]: Finished ignition-quench.service.
Feb 9 19:44:43.742930 initrd-setup-root-after-ignition[860]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Feb 9 19:44:43.745209 initrd-setup-root-after-ignition[862]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:44:43.746646 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:44:43.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.747984 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:44:43.751133 kernel: audit: type=1130 audit(1707507883.747:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.751186 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:44:43.762611 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:44:43.762697 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:44:43.767418 kernel: audit: type=1130 audit(1707507883.762:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.767435 kernel: audit: type=1131 audit(1707507883.763:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.763306 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:44:43.768594 systemd[1]: Reached target initrd.target.
Feb 9 19:44:43.769010 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:44:43.770006 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:44:43.780699 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:44:43.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.783788 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:44:43.784741 kernel: audit: type=1130 audit(1707507883.780:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.792190 systemd[1]: Stopped target network.target.
Feb 9 19:44:43.792576 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:44:43.793402 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:44:43.793636 systemd[1]: Stopped target timers.target.
Feb 9 19:44:43.793841 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:44:43.799331 kernel: audit: type=1131 audit(1707507883.796:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.793918 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:44:43.796703 systemd[1]: Stopped target initrd.target.
Feb 9 19:44:43.799667 systemd[1]: Stopped target basic.target.
Feb 9 19:44:43.799872 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:44:43.800207 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:44:43.802332 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:44:43.803387 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:44:43.804435 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:44:43.805464 systemd[1]: Stopped target sysinit.target.
Feb 9 19:44:43.806449 systemd[1]: Stopped target local-fs.target.
Feb 9 19:44:43.807425 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:44:43.807634 systemd[1]: Stopped target swap.target.
Feb 9 19:44:43.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.807830 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:44:43.813765 kernel: audit: type=1131 audit(1707507883.810:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.807908 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:44:43.810157 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:44:43.817296 kernel: audit: type=1131 audit(1707507883.814:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.812945 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:44:43.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.813032 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:44:43.814883 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:44:43.814961 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:44:43.817647 systemd[1]: Stopped target paths.target.
Feb 9 19:44:43.818738 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:44:43.822061 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:44:43.822318 systemd[1]: Stopped target slices.target.
Feb 9 19:44:43.823548 systemd[1]: Stopped target sockets.target.
Feb 9 19:44:43.823756 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:44:43.823813 systemd[1]: Closed iscsid.socket.
Feb 9 19:44:43.825572 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:44:43.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.825640 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:44:43.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.825935 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:44:43.826013 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:44:43.827251 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:44:43.827326 systemd[1]: Stopped ignition-files.service.
Feb 9 19:44:43.828968 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:44:43.829943 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:44:43.831917 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:44:43.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.833041 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:44:43.837028 ignition[875]: INFO : Ignition 2.14.0
Feb 9 19:44:43.837028 ignition[875]: INFO : Stage: umount
Feb 9 19:44:43.837028 ignition[875]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 9 19:44:43.837028 ignition[875]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 9 19:44:43.837028 ignition[875]: INFO : umount: umount passed
Feb 9 19:44:43.837028 ignition[875]: INFO : Ignition finished successfully
Feb 9 19:44:43.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.833683 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:44:43.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.833803 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:44:43.834784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:44:43.834865 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:44:43.837957 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:44:43.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.838932 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:44:43.840206 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:44:43.846000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:44:43.840277 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:44:43.843118 systemd-networkd[721]: eth0: DHCPv6 lease lost
Feb 9 19:44:43.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.844276 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:44:43.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.844357 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:44:43.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.846715 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:44:43.846781 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:44:43.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.847295 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:44:43.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.847325 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:44:43.847497 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:44:43.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.847523 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:44:43.847833 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:44:43.847861 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:44:43.848759 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:44:43.851010 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:44:43.851061 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:44:43.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.858000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.859000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:44:43.852424 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:44:43.852455 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:44:43.853920 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:44:43.853949 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:44:43.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.854381 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:44:43.856254 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:44:43.856664 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:44:43.856725 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:44:43.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.861300 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:44:43.861366 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:44:43.864218 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:44:43.864335 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:44:43.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.865142 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:44:43.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.865172 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:44:43.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.866341 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:44:43.866364 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:44:43.866538 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:44:43.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.866567 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:44:43.866785 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:44:43.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.866813 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:44:43.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.867000 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:44:43.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.867052 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:44:43.871500 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:44:43.871718 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:44:43.871757 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:44:43.874136 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:44:43.874166 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:44:43.874636 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:44:43.874667 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:44:43.875676 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 19:44:43.876453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:44:43.876516 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:44:43.892984 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:44:43.895509 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:44:43.895586 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:44:43.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.896839 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:44:43.898009 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:44:43.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:44:43.898056 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:44:43.899034 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:44:43.903868 systemd[1]: Switching root.
Feb 9 19:44:43.904000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:44:43.904000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:44:43.908000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:44:43.908000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:44:43.908000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:44:43.924548 iscsid[726]: iscsid shutting down.
Feb 9 19:44:43.925200 systemd-journald[196]: Received SIGTERM from PID 1 (n/a).
Feb 9 19:44:43.925232 systemd-journald[196]: Journal stopped
Feb 9 19:44:46.664691 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:44:46.664736 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:44:46.664749 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:44:46.664759 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:44:46.664773 kernel: SELinux: policy capability open_perms=1 Feb 9 19:44:46.664783 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:44:46.664793 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:44:46.664802 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:44:46.664812 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:44:46.664821 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:44:46.664830 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:44:46.664842 systemd[1]: Successfully loaded SELinux policy in 33.862ms. Feb 9 19:44:46.664863 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.130ms. Feb 9 19:44:46.664875 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:44:46.664885 systemd[1]: Detected virtualization kvm. Feb 9 19:44:46.664895 systemd[1]: Detected architecture x86-64. Feb 9 19:44:46.664905 systemd[1]: Detected first boot. Feb 9 19:44:46.664915 systemd[1]: Initializing machine ID from VM UUID. Feb 9 19:44:46.664925 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:44:46.664936 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:44:46.664947 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 19:44:46.664960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:44:46.664972 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:44:46.664983 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:44:46.664992 systemd[1]: Unnecessary job was removed for dev-vda6.device. Feb 9 19:44:46.665003 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:44:46.665013 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:44:46.665047 systemd[1]: Created slice system-getty.slice. Feb 9 19:44:46.665058 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:44:46.665068 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:44:46.665078 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:44:46.665088 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:44:46.665098 systemd[1]: Created slice user.slice. Feb 9 19:44:46.665108 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:44:46.665119 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:44:46.665129 systemd[1]: Set up automount boot.automount. Feb 9 19:44:46.665141 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:44:46.665150 systemd[1]: Reached target integritysetup.target. Feb 9 19:44:46.665160 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:44:46.665170 systemd[1]: Reached target remote-fs.target. Feb 9 19:44:46.665180 systemd[1]: Reached target slices.target. Feb 9 19:44:46.665190 systemd[1]: Reached target swap.target. Feb 9 19:44:46.665200 systemd[1]: Reached target torcx.target. Feb 9 19:44:46.665210 systemd[1]: Reached target veritysetup.target. 
Feb 9 19:44:46.665224 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:44:46.665237 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:44:46.665246 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:44:46.665257 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:44:46.665268 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:44:46.665278 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:44:46.665288 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:44:46.665299 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:44:46.665309 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:44:46.665319 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:44:46.665330 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:44:46.665340 systemd[1]: Mounting media.mount... Feb 9 19:44:46.665351 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:44:46.665361 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:44:46.665372 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:44:46.665382 systemd[1]: Mounting tmp.mount... Feb 9 19:44:46.665392 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:44:46.665402 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:44:46.665412 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:44:46.665423 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:44:46.665433 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:44:46.665443 systemd[1]: Starting modprobe@drm.service... Feb 9 19:44:46.665453 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:44:46.665464 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:44:46.665479 systemd[1]: Starting modprobe@loop.service... 
Feb 9 19:44:46.665491 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:44:46.665502 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 19:44:46.665513 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 19:44:46.665522 systemd[1]: Starting systemd-journald.service... Feb 9 19:44:46.665532 kernel: loop: module loaded Feb 9 19:44:46.665542 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:44:46.665552 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:44:46.665572 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:44:46.665582 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:44:46.665592 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:44:46.665602 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:44:46.665614 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:44:46.665624 systemd[1]: Mounted media.mount. Feb 9 19:44:46.665637 systemd-journald[1011]: Journal started Feb 9 19:44:46.665676 systemd-journald[1011]: Runtime Journal (/run/log/journal/ad34ecf64f224bc19229ebbf6856cd39) is 6.0M, max 48.4M, 42.4M free. 
Feb 9 19:44:46.607000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:44:46.607000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 19:44:46.663000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:44:46.663000 audit[1011]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc29d2caf0 a2=4000 a3=7ffc29d2cb8c items=0 ppid=1 pid=1011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:46.663000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:44:46.670140 systemd[1]: Started systemd-journald.service. Feb 9 19:44:46.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.667929 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:44:46.668531 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:44:46.669206 systemd[1]: Mounted tmp.mount. Feb 9 19:44:46.669961 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:44:46.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.670805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 19:44:46.671244 systemd[1]: Finished modprobe@configfs.service. 
Feb 9 19:44:46.672053 kernel: fuse: init (API version 7.34) Feb 9 19:44:46.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.672488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:44:46.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.672735 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 19:44:46.673650 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:44:46.673875 systemd[1]: Finished modprobe@drm.service. Feb 9 19:44:46.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 19:44:46.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.674658 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:44:46.674937 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:44:46.675784 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:44:46.676059 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:44:46.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.676834 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:44:46.677146 systemd[1]: Finished modprobe@loop.service. Feb 9 19:44:46.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.679593 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:44:46.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:46.681062 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:44:46.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.682504 systemd[1]: Finished systemd-remount-fs.service. Feb 9 19:44:46.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.683690 systemd[1]: Reached target network-pre.target. Feb 9 19:44:46.685719 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:44:46.687621 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:44:46.688222 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:44:46.689410 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:44:46.691080 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:44:46.691743 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:44:46.692564 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:44:46.693155 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:44:46.696899 systemd-journald[1011]: Time spent on flushing to /var/log/journal/ad34ecf64f224bc19229ebbf6856cd39 is 14.394ms for 1122 entries. Feb 9 19:44:46.696899 systemd-journald[1011]: System Journal (/var/log/journal/ad34ecf64f224bc19229ebbf6856cd39) is 8.0M, max 195.6M, 187.6M free. Feb 9 19:44:46.727291 systemd-journald[1011]: Received client request to flush runtime journal. 
Feb 9 19:44:46.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.694011 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:44:46.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.698073 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:44:46.699092 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:44:46.699774 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:44:46.701393 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:44:46.705682 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:44:46.706408 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:44:46.708103 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:44:46.722212 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:44:46.724359 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:44:46.727384 systemd[1]: Finished systemd-sysusers.service. 
Feb 9 19:44:46.728582 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:44:46.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:46.733039 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:44:46.734120 udevadm[1066]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:44:46.746380 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:44:46.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.175136 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:44:47.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.177148 systemd[1]: Starting systemd-udevd.service... Feb 9 19:44:47.191416 systemd-udevd[1073]: Using default interface naming scheme 'v252'. Feb 9 19:44:47.201786 systemd[1]: Started systemd-udevd.service. Feb 9 19:44:47.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.204472 systemd[1]: Starting systemd-networkd.service... Feb 9 19:44:47.208467 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:44:47.239496 systemd[1]: Found device dev-ttyS0.device. Feb 9 19:44:47.248405 systemd[1]: Started systemd-userdbd.service. 
Feb 9 19:44:47.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.262293 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:44:47.270042 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 9 19:44:47.287036 kernel: ACPI: button: Power Button [PWRF] Feb 9 19:44:47.305876 systemd-networkd[1085]: lo: Link UP Feb 9 19:44:47.305884 systemd-networkd[1085]: lo: Gained carrier Feb 9 19:44:47.306332 systemd-networkd[1085]: Enumeration completed Feb 9 19:44:47.306429 systemd[1]: Started systemd-networkd.service. Feb 9 19:44:47.306443 systemd-networkd[1085]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:44:47.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:47.307907 systemd-networkd[1085]: eth0: Link UP Feb 9 19:44:47.307915 systemd-networkd[1085]: eth0: Gained carrier Feb 9 19:44:47.300000 audit[1091]: AVC avc: denied { confidentiality } for pid=1091 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:44:47.300000 audit[1091]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563bd2424fd0 a1=32194 a2=7f9191450bc5 a3=5 items=108 ppid=1073 pid=1091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:47.300000 audit: CWD cwd="/" Feb 9 19:44:47.300000 audit: PATH item=0 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=1 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=2 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=3 name=(null) inode=13027 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=4 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=5 name=(null) inode=13028 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=6 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=7 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=8 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=9 name=(null) inode=13030 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=10 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=11 name=(null) inode=13031 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=12 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=13 name=(null) inode=13032 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=14 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=15 name=(null) inode=13033 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=16 name=(null) inode=13029 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=17 name=(null) inode=13034 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=18 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=19 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=20 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=21 name=(null) inode=13036 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=22 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=23 name=(null) inode=13037 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH 
item=24 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=25 name=(null) inode=13038 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=26 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=27 name=(null) inode=13039 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=28 name=(null) inode=13035 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=29 name=(null) inode=13040 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=30 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=31 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=32 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=33 name=(null) inode=13042 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=34 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=35 name=(null) inode=13043 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=36 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=37 name=(null) inode=13044 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=38 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=39 name=(null) inode=13045 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=40 name=(null) inode=13041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=41 name=(null) inode=13046 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=42 name=(null) inode=13026 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=43 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=44 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=45 name=(null) inode=13048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=46 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=47 name=(null) inode=13049 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=48 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=49 name=(null) inode=13050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=50 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=51 name=(null) inode=13051 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=52 name=(null) inode=13047 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=53 name=(null) inode=13052 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=54 name=(null) inode=50 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=55 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=56 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=57 name=(null) inode=13054 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=58 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=59 name=(null) inode=13055 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=60 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 
9 19:44:47.300000 audit: PATH item=61 name=(null) inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=62 name=(null) inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=63 name=(null) inode=13057 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=64 name=(null) inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=65 name=(null) inode=13058 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=66 name=(null) inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=67 name=(null) inode=13059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=68 name=(null) inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=69 name=(null) inode=13060 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=70 name=(null) 
inode=13056 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=71 name=(null) inode=13061 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=72 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=73 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=74 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=75 name=(null) inode=13063 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=76 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=77 name=(null) inode=13064 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=78 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=79 name=(null) inode=13065 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=80 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=81 name=(null) inode=13066 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=82 name=(null) inode=13062 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=83 name=(null) inode=13067 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=84 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=85 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=86 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=87 name=(null) inode=13069 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=88 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=89 name=(null) inode=13070 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=90 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=91 name=(null) inode=13071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=92 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=93 name=(null) inode=13072 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=94 name=(null) inode=13068 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=95 name=(null) inode=13073 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=96 name=(null) inode=13053 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=97 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=98 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=99 name=(null) inode=13075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=100 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=101 name=(null) inode=13076 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=102 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=103 name=(null) inode=13077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=104 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=105 name=(null) inode=13078 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PATH item=106 name=(null) inode=13074 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 
19:44:47.300000 audit: PATH item=107 name=(null) inode=13079 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:44:47.300000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:44:47.321134 systemd-networkd[1085]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 9 19:44:47.325038 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Feb 9 19:44:47.339036 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 9 19:44:47.345058 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:44:47.382552 kernel: kvm: Nested Virtualization enabled Feb 9 19:44:47.382646 kernel: SVM: kvm: Nested Paging enabled Feb 9 19:44:47.382661 kernel: SVM: Virtual VMLOAD VMSAVE supported Feb 9 19:44:47.382674 kernel: SVM: Virtual GIF supported Feb 9 19:44:47.396040 kernel: EDAC MC: Ver: 3.0.0 Feb 9 19:44:47.418359 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:44:47.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.420063 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:44:47.426235 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:44:47.450741 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:44:47.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.451477 systemd[1]: Reached target cryptsetup.target. Feb 9 19:44:47.453175 systemd[1]: Starting lvm2-activation.service... 
Feb 9 19:44:47.456067 lvm[1112]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:44:47.478600 systemd[1]: Finished lvm2-activation.service. Feb 9 19:44:47.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.479237 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:44:47.479814 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:44:47.479836 systemd[1]: Reached target local-fs.target. Feb 9 19:44:47.480371 systemd[1]: Reached target machines.target. Feb 9 19:44:47.481824 systemd[1]: Starting ldconfig.service... Feb 9 19:44:47.482582 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:44:47.482634 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:44:47.483480 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:44:47.484994 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:44:47.486802 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:44:47.488104 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:44:47.488132 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:44:47.488951 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:44:47.489898 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1115 (bootctl) Feb 9 19:44:47.491134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Feb 9 19:44:47.492144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:44:47.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.500836 systemd-tmpfiles[1119]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:44:47.501824 systemd-tmpfiles[1119]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:44:47.502932 systemd-tmpfiles[1119]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:44:47.522676 systemd-fsck[1124]: fsck.fat 4.2 (2021-01-31) Feb 9 19:44:47.522676 systemd-fsck[1124]: /dev/vda1: 790 files, 115362/258078 clusters Feb 9 19:44:47.524127 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:44:47.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:47.565661 ldconfig[1114]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:44:47.664060 systemd[1]: Mounting boot.mount... Feb 9 19:44:48.595284 systemd[1]: Mounted boot.mount. Feb 9 19:44:48.605187 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:44:48.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.639324 systemd[1]: Finished ldconfig.service. 
Feb 9 19:44:48.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.653919 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:44:48.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.656078 systemd[1]: Starting audit-rules.service... Feb 9 19:44:48.657598 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:44:48.659557 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:44:48.661746 systemd[1]: Starting systemd-resolved.service... Feb 9 19:44:48.664840 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:44:48.666329 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:44:48.668278 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:44:48.668979 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:44:48.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.670131 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:44:48.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.671259 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:44:48.674493 systemd[1]: Finished systemd-journal-catalog-update.service. 
Feb 9 19:44:48.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.676498 systemd[1]: Starting systemd-update-done.service... Feb 9 19:44:48.680000 audit[1146]: SYSTEM_BOOT pid=1146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.681676 systemd[1]: Finished systemd-update-done.service. Feb 9 19:44:48.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:48.683899 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:44:48.691000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:44:48.691000 audit[1159]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcf25cb2f0 a2=420 a3=0 items=0 ppid=1133 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:48.691000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:44:48.691635 augenrules[1159]: No rules Feb 9 19:44:48.692003 systemd[1]: Finished audit-rules.service. Feb 9 19:44:48.727972 systemd[1]: Started systemd-timesyncd.service. 
Feb 9 19:44:47.948325 systemd-timesyncd[1143]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 9 19:44:47.965884 systemd-journald[1011]: Time jumped backwards, rotating. Feb 9 19:44:47.948369 systemd-timesyncd[1143]: Initial clock synchronization to Fri 2024-02-09 19:44:47.948264 UTC. Feb 9 19:44:47.948413 systemd[1]: Reached target time-set.target. Feb 9 19:44:47.952995 systemd-resolved[1137]: Positive Trust Anchors: Feb 9 19:44:47.953003 systemd-resolved[1137]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:44:47.953028 systemd-resolved[1137]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:44:47.961013 systemd-resolved[1137]: Defaulting to hostname 'linux'. Feb 9 19:44:47.962262 systemd[1]: Started systemd-resolved.service. Feb 9 19:44:47.962947 systemd[1]: Reached target network.target. Feb 9 19:44:47.963498 systemd[1]: Reached target nss-lookup.target. Feb 9 19:44:47.964156 systemd[1]: Reached target sysinit.target. Feb 9 19:44:47.964879 systemd[1]: Started motdgen.path. Feb 9 19:44:47.965491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:44:47.966360 systemd[1]: Started logrotate.timer. Feb 9 19:44:47.966933 systemd[1]: Started mdadm.timer. Feb 9 19:44:47.967424 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:44:47.968012 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:44:47.968035 systemd[1]: Reached target paths.target. 
Feb 9 19:44:47.968563 systemd[1]: Reached target timers.target. Feb 9 19:44:47.969399 systemd[1]: Listening on dbus.socket. Feb 9 19:44:47.970915 systemd[1]: Starting docker.socket... Feb 9 19:44:47.972143 systemd[1]: Listening on sshd.socket. Feb 9 19:44:47.972761 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:44:47.973021 systemd[1]: Listening on docker.socket. Feb 9 19:44:47.973576 systemd[1]: Reached target sockets.target. Feb 9 19:44:47.974157 systemd[1]: Reached target basic.target. Feb 9 19:44:47.974798 systemd[1]: System is tainted: cgroupsv1 Feb 9 19:44:47.974836 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:44:47.974853 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:44:47.975781 systemd[1]: Starting containerd.service... Feb 9 19:44:47.977174 systemd[1]: Starting dbus.service... Feb 9 19:44:47.978366 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:44:47.980022 systemd[1]: Starting extend-filesystems.service... Feb 9 19:44:47.980698 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:44:47.981900 jq[1172]: false Feb 9 19:44:47.981522 systemd[1]: Starting motdgen.service... Feb 9 19:44:47.982932 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:44:47.984275 systemd[1]: Starting prepare-critools.service... Feb 9 19:44:47.985772 systemd[1]: Starting prepare-helm.service... Feb 9 19:44:47.987164 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:44:47.989046 systemd[1]: Starting sshd-keygen.service... Feb 9 19:44:47.992320 systemd[1]: Starting systemd-logind.service... 
Feb 9 19:44:47.993152 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:44:47.993198 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:44:47.994143 systemd[1]: Starting update-engine.service... Feb 9 19:44:47.996568 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:44:47.999584 jq[1195]: true Feb 9 19:44:47.999550 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:44:48.015600 dbus-daemon[1171]: [system] SELinux support is enabled Feb 9 19:44:48.025838 extend-filesystems[1173]: Found sr0 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda1 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda2 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda3 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found usr Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda4 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda6 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda7 Feb 9 19:44:48.025838 extend-filesystems[1173]: Found vda9 Feb 9 19:44:48.025838 extend-filesystems[1173]: Checking size of /dev/vda9 Feb 9 19:44:48.001586 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:44:48.001848 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:44:48.042338 tar[1201]: ./ Feb 9 19:44:48.042338 tar[1201]: ./macvlan Feb 9 19:44:48.002030 systemd[1]: Finished motdgen.service. Feb 9 19:44:48.042647 tar[1202]: crictl Feb 9 19:44:48.007153 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:44:48.042858 tar[1203]: linux-amd64/helm Feb 9 19:44:48.007344 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 19:44:48.043067 jq[1209]: true Feb 9 19:44:48.015832 systemd[1]: Started dbus.service. Feb 9 19:44:48.017982 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:44:48.018000 systemd[1]: Reached target system-config.target. Feb 9 19:44:48.018667 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:44:48.018678 systemd[1]: Reached target user-config.target. Feb 9 19:44:48.044122 update_engine[1193]: I0209 19:44:48.044020 1193 main.cc:92] Flatcar Update Engine starting Feb 9 19:44:48.045589 systemd[1]: Started update-engine.service. Feb 9 19:44:48.047470 update_engine[1193]: I0209 19:44:48.045625 1193 update_check_scheduler.cc:74] Next update check in 3m12s Feb 9 19:44:48.049616 systemd[1]: Started locksmithd.service. Feb 9 19:44:48.056991 extend-filesystems[1173]: Resized partition /dev/vda9 Feb 9 19:44:48.065663 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 9 19:44:48.055161 systemd-logind[1192]: Watching system buttons on /dev/input/event1 (Power Button) Feb 9 19:44:48.066073 env[1210]: time="2024-02-09T19:44:48.062286944Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:44:48.066217 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:44:48.066276 extend-filesystems[1239]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 19:44:48.055176 systemd-logind[1192]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:44:48.055856 systemd-logind[1192]: New seat seat0. Feb 9 19:44:48.057881 systemd[1]: Started systemd-logind.service. Feb 9 19:44:48.063288 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 9 19:44:48.076410 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 9 19:44:48.094811 extend-filesystems[1239]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 9 19:44:48.094811 extend-filesystems[1239]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 19:44:48.094811 extend-filesystems[1239]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 9 19:44:48.094248 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:44:48.098490 extend-filesystems[1173]: Resized filesystem in /dev/vda9 Feb 9 19:44:48.094472 systemd[1]: Finished extend-filesystems.service. Feb 9 19:44:48.099156 env[1210]: time="2024-02-09T19:44:48.098566789Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:44:48.099156 env[1210]: time="2024-02-09T19:44:48.098735956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:44:48.099845 env[1210]: time="2024-02-09T19:44:48.099811973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:44:48.099845 env[1210]: time="2024-02-09T19:44:48.099843182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100110 env[1210]: time="2024-02-09T19:44:48.100084003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100110 env[1210]: time="2024-02-09T19:44:48.100105393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100177 env[1210]: time="2024-02-09T19:44:48.100117055Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:44:48.100177 env[1210]: time="2024-02-09T19:44:48.100125601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100214 env[1210]: time="2024-02-09T19:44:48.100185063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100410 env[1210]: time="2024-02-09T19:44:48.100371613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100553 env[1210]: time="2024-02-09T19:44:48.100527034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:44:48.100553 env[1210]: time="2024-02-09T19:44:48.100548264Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:44:48.100616 env[1210]: time="2024-02-09T19:44:48.100589371Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:44:48.100616 env[1210]: time="2024-02-09T19:44:48.100599029Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:44:48.104341 env[1210]: time="2024-02-09T19:44:48.104312682Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:44:48.104382 env[1210]: time="2024-02-09T19:44:48.104348870Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 19:44:48.104382 env[1210]: time="2024-02-09T19:44:48.104362425Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:44:48.104441 env[1210]: time="2024-02-09T19:44:48.104433459Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104461 env[1210]: time="2024-02-09T19:44:48.104447815Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104481 env[1210]: time="2024-02-09T19:44:48.104468254Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104500 env[1210]: time="2024-02-09T19:44:48.104480747Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104500 env[1210]: time="2024-02-09T19:44:48.104493742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104548 env[1210]: time="2024-02-09T19:44:48.104504712Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104548 env[1210]: time="2024-02-09T19:44:48.104517276Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104548 env[1210]: time="2024-02-09T19:44:48.104528647Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.104548 env[1210]: time="2024-02-09T19:44:48.104539908Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:44:48.104621 env[1210]: time="2024-02-09T19:44:48.104614268Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 9 19:44:48.104699 env[1210]: time="2024-02-09T19:44:48.104675422Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:44:48.104985 env[1210]: time="2024-02-09T19:44:48.104962691Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:44:48.105026 env[1210]: time="2024-02-09T19:44:48.104991254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105026 env[1210]: time="2024-02-09T19:44:48.105004239Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:44:48.105072 env[1210]: time="2024-02-09T19:44:48.105041278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105072 env[1210]: time="2024-02-09T19:44:48.105053060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105072 env[1210]: time="2024-02-09T19:44:48.105064772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105125 env[1210]: time="2024-02-09T19:44:48.105074861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105125 env[1210]: time="2024-02-09T19:44:48.105086403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105125 env[1210]: time="2024-02-09T19:44:48.105098756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105125 env[1210]: time="2024-02-09T19:44:48.105109376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 19:44:48.105125 env[1210]: time="2024-02-09T19:44:48.105119685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105217 env[1210]: time="2024-02-09T19:44:48.105131758Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:44:48.105238 env[1210]: time="2024-02-09T19:44:48.105228670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105259 env[1210]: time="2024-02-09T19:44:48.105242505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105259 env[1210]: time="2024-02-09T19:44:48.105254117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:44:48.105298 env[1210]: time="2024-02-09T19:44:48.105267082Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:44:48.105298 env[1210]: time="2024-02-09T19:44:48.105280537Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:44:48.105298 env[1210]: time="2024-02-09T19:44:48.105290686Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:44:48.105352 env[1210]: time="2024-02-09T19:44:48.105308860Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:44:48.105352 env[1210]: time="2024-02-09T19:44:48.105340058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:44:48.105586 env[1210]: time="2024-02-09T19:44:48.105532650Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:44:48.105586 env[1210]: time="2024-02-09T19:44:48.105586370Z" level=info msg="Connect containerd service" Feb 9 19:44:48.106174 env[1210]: time="2024-02-09T19:44:48.105616166Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:44:48.106917 env[1210]: time="2024-02-09T19:44:48.106480527Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:44:48.106917 env[1210]: time="2024-02-09T19:44:48.106726227Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:44:48.106917 env[1210]: time="2024-02-09T19:44:48.106788965Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:44:48.106917 env[1210]: time="2024-02-09T19:44:48.106838608Z" level=info msg="containerd successfully booted in 0.064919s" Feb 9 19:44:48.106895 systemd[1]: Started containerd.service. 
Feb 9 19:44:48.109293 tar[1201]: ./static Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111469651Z" level=info msg="Start subscribing containerd event" Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111572695Z" level=info msg="Start recovering state" Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111648817Z" level=info msg="Start event monitor" Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111671129Z" level=info msg="Start snapshots syncer" Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111683733Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:44:48.113450 env[1210]: time="2024-02-09T19:44:48.111690556Z" level=info msg="Start streaming server" Feb 9 19:44:48.131705 tar[1201]: ./vlan Feb 9 19:44:48.161938 tar[1201]: ./portmap Feb 9 19:44:48.190353 tar[1201]: ./host-local Feb 9 19:44:48.196496 systemd-networkd[1085]: eth0: Gained IPv6LL Feb 9 19:44:48.215785 tar[1201]: ./vrf Feb 9 19:44:48.242990 tar[1201]: ./bridge Feb 9 19:44:48.275794 tar[1201]: ./tuning Feb 9 19:44:48.302150 tar[1201]: ./firewall Feb 9 19:44:48.336294 tar[1201]: ./host-device Feb 9 19:44:48.365887 tar[1201]: ./sbr Feb 9 19:44:48.394611 tar[1201]: ./loopback Feb 9 19:44:48.422368 tar[1201]: ./dhcp Feb 9 19:44:48.425490 locksmithd[1236]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:44:48.432431 tar[1203]: linux-amd64/LICENSE Feb 9 19:44:48.432505 tar[1203]: linux-amd64/README.md Feb 9 19:44:48.436232 systemd[1]: Finished prepare-helm.service. Feb 9 19:44:48.466075 systemd[1]: Finished prepare-critools.service. Feb 9 19:44:48.496307 tar[1201]: ./ptp Feb 9 19:44:48.523957 tar[1201]: ./ipvlan Feb 9 19:44:48.551414 tar[1201]: ./bandwidth Feb 9 19:44:48.586286 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:44:48.892543 sshd_keygen[1204]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:44:48.909776 systemd[1]: Finished sshd-keygen.service. 
Feb 9 19:44:48.911815 systemd[1]: Starting issuegen.service... Feb 9 19:44:48.916480 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:44:48.916669 systemd[1]: Finished issuegen.service. Feb 9 19:44:48.918505 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:44:48.923831 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:44:48.925646 systemd[1]: Started getty@tty1.service. Feb 9 19:44:48.927084 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:44:48.927890 systemd[1]: Reached target getty.target. Feb 9 19:44:48.928571 systemd[1]: Reached target multi-user.target. Feb 9 19:44:48.930208 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:44:48.935594 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:44:48.935887 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:44:48.936710 systemd[1]: Startup finished in 5.804s (kernel) + 5.752s (userspace) = 11.557s. Feb 9 19:44:53.018721 systemd[1]: Created slice system-sshd.slice. Feb 9 19:44:53.019846 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:43364.service. Feb 9 19:44:53.059895 sshd[1280]: Accepted publickey for core from 10.0.0.1 port 43364 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.061036 sshd[1280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.068693 systemd-logind[1192]: New session 1 of user core. Feb 9 19:44:53.069527 systemd[1]: Created slice user-500.slice. Feb 9 19:44:53.070377 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:44:53.077065 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:44:53.078007 systemd[1]: Starting user@500.service... Feb 9 19:44:53.080531 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.143167 systemd[1285]: Queued start job for default target default.target. 
Feb 9 19:44:53.143405 systemd[1285]: Reached target paths.target. Feb 9 19:44:53.143421 systemd[1285]: Reached target sockets.target. Feb 9 19:44:53.143432 systemd[1285]: Reached target timers.target. Feb 9 19:44:53.143441 systemd[1285]: Reached target basic.target. Feb 9 19:44:53.143476 systemd[1285]: Reached target default.target. Feb 9 19:44:53.143494 systemd[1285]: Startup finished in 58ms. Feb 9 19:44:53.143583 systemd[1]: Started user@500.service. Feb 9 19:44:53.144423 systemd[1]: Started session-1.scope. Feb 9 19:44:53.194608 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:43366.service. Feb 9 19:44:53.231781 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 43366 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.232776 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.236557 systemd-logind[1192]: New session 2 of user core. Feb 9 19:44:53.237568 systemd[1]: Started session-2.scope. Feb 9 19:44:53.290741 sshd[1294]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.293520 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:43374.service. Feb 9 19:44:53.293990 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:43366.service: Deactivated successfully. Feb 9 19:44:53.294972 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 19:44:53.295110 systemd-logind[1192]: Session 2 logged out. Waiting for processes to exit. Feb 9 19:44:53.296306 systemd-logind[1192]: Removed session 2. Feb 9 19:44:53.331047 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 43374 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.332030 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.335482 systemd-logind[1192]: New session 3 of user core. Feb 9 19:44:53.336117 systemd[1]: Started session-3.scope. 
Feb 9 19:44:53.385228 sshd[1300]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.387263 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:43378.service. Feb 9 19:44:53.388086 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:43374.service: Deactivated successfully. Feb 9 19:44:53.388791 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 19:44:53.388844 systemd-logind[1192]: Session 3 logged out. Waiting for processes to exit. Feb 9 19:44:53.389729 systemd-logind[1192]: Removed session 3. Feb 9 19:44:53.425607 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 43378 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.426528 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.429657 systemd-logind[1192]: New session 4 of user core. Feb 9 19:44:53.430313 systemd[1]: Started session-4.scope. Feb 9 19:44:53.481529 sshd[1306]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.483550 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:43382.service. Feb 9 19:44:53.483966 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:43378.service: Deactivated successfully. Feb 9 19:44:53.484721 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:44:53.484868 systemd-logind[1192]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:44:53.485959 systemd-logind[1192]: Removed session 4. Feb 9 19:44:53.519529 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 43382 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.520661 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.523603 systemd-logind[1192]: New session 5 of user core. Feb 9 19:44:53.524296 systemd[1]: Started session-5.scope. 
Feb 9 19:44:53.577721 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 19:44:53.577891 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:44:53.585580 dbus-daemon[1171]: \xd0\u001d\xfc\xd5\U: received setenforce notice (enforcing=-851066496) Feb 9 19:44:53.587720 sudo[1319]: pam_unix(sudo:session): session closed for user root Feb 9 19:44:53.589151 sshd[1313]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.591259 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:43386.service. Feb 9 19:44:53.591712 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:43382.service: Deactivated successfully. Feb 9 19:44:53.592340 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:44:53.593032 systemd-logind[1192]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:44:53.593797 systemd-logind[1192]: Removed session 5. Feb 9 19:44:53.627809 sshd[1321]: Accepted publickey for core from 10.0.0.1 port 43386 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.628752 sshd[1321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:44:53.631650 systemd-logind[1192]: New session 6 of user core. Feb 9 19:44:53.632283 systemd[1]: Started session-6.scope. Feb 9 19:44:53.682966 sudo[1328]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 19:44:53.683136 sudo[1328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:44:53.685149 sudo[1328]: pam_unix(sudo:session): session closed for user root Feb 9 19:44:53.688923 sudo[1327]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 19:44:53.689079 sudo[1327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:44:53.696521 systemd[1]: Stopping audit-rules.service... 
Feb 9 19:44:53.696000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:44:53.697486 auditctl[1331]: No rules Feb 9 19:44:53.700169 kernel: kauditd_printk_skb: 208 callbacks suppressed Feb 9 19:44:53.700208 kernel: audit: type=1305 audit(1707507893.696:132): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 19:44:53.700223 kernel: audit: type=1300 audit(1707507893.696:132): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0078cd80 a2=420 a3=0 items=0 ppid=1 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:53.696000 audit[1331]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe0078cd80 a2=420 a3=0 items=0 ppid=1 pid=1331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:53.698164 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 19:44:53.698414 systemd[1]: Stopped audit-rules.service. Feb 9 19:44:53.699832 systemd[1]: Starting audit-rules.service... Feb 9 19:44:53.702175 kernel: audit: type=1327 audit(1707507893.696:132): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:44:53.696000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 19:44:53.703075 kernel: audit: type=1131 audit(1707507893.697:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:53.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.714164 augenrules[1349]: No rules Feb 9 19:44:53.714718 systemd[1]: Finished audit-rules.service. Feb 9 19:44:53.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.715502 sudo[1327]: pam_unix(sudo:session): session closed for user root Feb 9 19:44:53.715000 audit[1327]: USER_END pid=1327 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.718130 sshd[1321]: pam_unix(sshd:session): session closed for user core Feb 9 19:44:53.719943 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:43386.service: Deactivated successfully. Feb 9 19:44:53.720518 kernel: audit: type=1130 audit(1707507893.714:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.720543 kernel: audit: type=1106 audit(1707507893.715:135): pid=1327 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.715000 audit[1327]: CRED_DISP pid=1327 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 19:44:53.722562 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:43388.service. Feb 9 19:44:53.725405 kernel: audit: type=1104 audit(1707507893.715:136): pid=1327 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.725438 kernel: audit: type=1106 audit(1707507893.718:137): pid=1321 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.718000 audit[1321]: USER_END pid=1321 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.724610 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:44:53.725034 systemd-logind[1192]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:44:53.727409 kernel: audit: type=1104 audit(1707507893.718:138): pid=1321 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.718000 audit[1321]: CRED_DISP pid=1321 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.726553 systemd-logind[1192]: Removed session 6. 
Feb 9 19:44:53.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.60:22-10.0.0.1:43386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.730978 kernel: audit: type=1131 audit(1707507893.719:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.60:22-10.0.0.1:43386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.60:22-10.0.0.1:43388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.759000 audit[1355]: USER_ACCT pid=1355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.759722 sshd[1355]: Accepted publickey for core from 10.0.0.1 port 43388 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:44:53.760000 audit[1355]: CRED_ACQ pid=1355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.760000 audit[1355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6cc1c610 a2=3 a3=0 items=0 ppid=1 pid=1355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:53.760000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:44:53.760706 sshd[1355]: pam_unix(sshd:session): session opened for user core(uid=500) by 
(uid=0) Feb 9 19:44:53.763806 systemd-logind[1192]: New session 7 of user core. Feb 9 19:44:53.764503 systemd[1]: Started session-7.scope. Feb 9 19:44:53.767000 audit[1355]: USER_START pid=1355 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.769000 audit[1359]: CRED_ACQ pid=1359 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:44:53.814000 audit[1360]: USER_ACCT pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.815167 sudo[1360]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:44:53.814000 audit[1360]: CRED_REFR pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:53.815344 sudo[1360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:44:53.816000 audit[1360]: USER_START pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:44:54.360165 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:44:54.366485 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 19:44:54.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:54.366736 systemd[1]: Reached target network-online.target. Feb 9 19:44:54.367807 systemd[1]: Starting docker.service... Feb 9 19:44:54.397568 env[1379]: time="2024-02-09T19:44:54.397518652Z" level=info msg="Starting up" Feb 9 19:44:54.398980 env[1379]: time="2024-02-09T19:44:54.398941550Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:44:54.399043 env[1379]: time="2024-02-09T19:44:54.398978619Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:44:54.399043 env[1379]: time="2024-02-09T19:44:54.399006401Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:44:54.399043 env[1379]: time="2024-02-09T19:44:54.399018113Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:44:54.400771 env[1379]: time="2024-02-09T19:44:54.400743909Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:44:54.400771 env[1379]: time="2024-02-09T19:44:54.400769036Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:44:54.400830 env[1379]: time="2024-02-09T19:44:54.400786569Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:44:54.400830 env[1379]: time="2024-02-09T19:44:54.400798251Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:44:55.093476 env[1379]: time="2024-02-09T19:44:55.093429094Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 19:44:55.093476 env[1379]: time="2024-02-09T19:44:55.093455443Z" level=warning 
msg="Your kernel does not support cgroup blkio weight_device" Feb 9 19:44:55.093671 env[1379]: time="2024-02-09T19:44:55.093599453Z" level=info msg="Loading containers: start." Feb 9 19:44:55.128000 audit[1413]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1413 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.128000 audit[1413]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fff57e97e10 a2=0 a3=7fff57e97dfc items=0 ppid=1379 pid=1413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.128000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 19:44:55.130000 audit[1415]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1415 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.130000 audit[1415]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdd3cd7c20 a2=0 a3=7ffdd3cd7c0c items=0 ppid=1379 pid=1415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.130000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 19:44:55.131000 audit[1417]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.131000 audit[1417]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd26c2bb50 a2=0 a3=7ffd26c2bb3c items=0 ppid=1379 pid=1417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 9 19:44:55.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:44:55.133000 audit[1419]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.133000 audit[1419]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4a1fa180 a2=0 a3=7fff4a1fa16c items=0 ppid=1379 pid=1419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:44:55.134000 audit[1421]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1421 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.134000 audit[1421]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff8ef8a440 a2=0 a3=7fff8ef8a42c items=0 ppid=1379 pid=1421 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.134000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 19:44:55.146000 audit[1426]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1426 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.146000 audit[1426]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdda760420 a2=0 a3=7ffdda76040c items=0 ppid=1379 pid=1426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 19:44:55.155000 audit[1428]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.155000 audit[1428]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd5c1526c0 a2=0 a3=7ffd5c1526ac items=0 ppid=1379 pid=1428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.155000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 19:44:55.156000 audit[1430]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.156000 audit[1430]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffa65fb580 a2=0 a3=7fffa65fb56c items=0 ppid=1379 pid=1430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.156000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 19:44:55.158000 audit[1432]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.158000 audit[1432]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc6103b5f0 a2=0 a3=7ffc6103b5dc items=0 ppid=1379 pid=1432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.158000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:44:55.166000 audit[1436]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.166000 audit[1436]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeefdbb1c0 a2=0 a3=7ffeefdbb1ac items=0 ppid=1379 pid=1436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.166000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:44:55.167000 audit[1437]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.167000 audit[1437]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffdd91d0b70 a2=0 a3=7ffdd91d0b5c items=0 ppid=1379 pid=1437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.167000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:44:55.174432 kernel: Initializing XFRM netlink socket Feb 9 19:44:55.199420 env[1379]: time="2024-02-09T19:44:55.199372998Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:44:55.213000 audit[1445]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.213000 audit[1445]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd91211900 a2=0 a3=7ffd912118ec items=0 ppid=1379 pid=1445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.213000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 19:44:55.224000 audit[1448]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.224000 audit[1448]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc4edeaab0 a2=0 a3=7ffc4edeaa9c items=0 ppid=1379 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.224000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 19:44:55.226000 audit[1451]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.226000 audit[1451]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd5ad33130 a2=0 a3=7ffd5ad3311c items=0 ppid=1379 pid=1451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 19:44:55.228000 audit[1453]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.228000 audit[1453]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff1f02d8e0 a2=0 a3=7fff1f02d8cc items=0 ppid=1379 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.228000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 19:44:55.229000 audit[1455]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.229000 audit[1455]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff1bc096b0 a2=0 a3=7fff1bc0969c items=0 ppid=1379 pid=1455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 19:44:55.231000 audit[1457]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.231000 audit[1457]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fffe793fc90 a2=0 a3=7fffe793fc7c items=0 ppid=1379 pid=1457 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.231000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 19:44:55.232000 audit[1459]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.232000 audit[1459]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc28351b00 a2=0 a3=7ffc28351aec items=0 ppid=1379 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.232000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 19:44:55.238000 audit[1462]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.238000 audit[1462]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe514c2ba0 a2=0 a3=7ffe514c2b8c items=0 ppid=1379 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.238000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 19:44:55.240000 audit[1464]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule 
pid=1464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.240000 audit[1464]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdb58a0d00 a2=0 a3=7ffdb58a0cec items=0 ppid=1379 pid=1464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.240000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 19:44:55.241000 audit[1466]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.241000 audit[1466]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe733cc380 a2=0 a3=7ffe733cc36c items=0 ppid=1379 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.241000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 19:44:55.243000 audit[1468]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.243000 audit[1468]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd0acb22f0 a2=0 a3=7ffd0acb22dc items=0 ppid=1379 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.243000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 19:44:55.244176 systemd-networkd[1085]: docker0: Link UP Feb 9 19:44:55.251000 audit[1472]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.251000 audit[1472]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffda94a25f0 a2=0 a3=7ffda94a25dc items=0 ppid=1379 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:44:55.252000 audit[1473]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:44:55.252000 audit[1473]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc1f8a2c60 a2=0 a3=7ffc1f8a2c4c items=0 ppid=1379 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:44:55.252000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 19:44:55.252879 env[1379]: time="2024-02-09T19:44:55.252849827Z" level=info msg="Loading containers: done." Feb 9 19:44:55.262258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3491957648-merged.mount: Deactivated successfully. 
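The `PROCTITLE` fields in the audit records above are the invoked command line, hex-encoded with NUL bytes separating argv elements. A minimal decoder (a sketch; `decode_proctitle` is an illustrative helper name, not part of any audit tooling) applied to the `DOCKER-ISOLATION-STAGE-2` DROP rule logged just above:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated."""
    raw = bytes.fromhex(hex_str)
    # argv elements are separated by NUL bytes; join with spaces for display
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# The DOCKER-ISOLATION-STAGE-2 DROP rule from the audit record above:
cmd = decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74"
    "0066696C746572002D4900444F434B45522D49534F4C4154494F4E2D"
    "53544147452D32002D6F00646F636B657230002D6A0044524F50"
)
print(cmd)  # /usr/sbin/iptables --wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
```

This confirms the audit stream is recording dockerd building its standard isolation chains via `xtables-nft-multi`.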
Feb 9 19:44:55.264954 env[1379]: time="2024-02-09T19:44:55.264926912Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:44:55.265064 env[1379]: time="2024-02-09T19:44:55.265044572Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:44:55.265145 env[1379]: time="2024-02-09T19:44:55.265128550Z" level=info msg="Daemon has completed initialization" Feb 9 19:44:55.280467 systemd[1]: Started docker.service. Feb 9 19:44:55.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:55.286963 env[1379]: time="2024-02-09T19:44:55.286906844Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:44:55.303066 systemd[1]: Reloading. Feb 9 19:44:55.366642 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2024-02-09T19:44:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:44:55.366669 /usr/lib/systemd/system-generators/torcx-generator[1522]: time="2024-02-09T19:44:55Z" level=info msg="torcx already run" Feb 9 19:44:55.429017 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:44:55.429033 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
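The `locksmithd.service` warnings above flag legacy cgroup-v1 directives (`CPUShares=`, `MemoryLimit=`) that systemd wants migrated to their v2 equivalents (`CPUWeight=`, `MemoryMax=`). For `CPUShares=`, a linear endpoint-preserving mapping from the v1 range (2..262144) onto the v2 weight range (1..10000) can be sketched as below; this is an assumption based on the documented ranges, and systemd's exact rounding for mid-range values may differ, so verify against your systemd version before relying on it:

```python
def shares_to_weight(shares: int) -> int:
    """Map a legacy CPUShares= value (2..262144) onto the CPUWeight=
    range (1..10000) with a linear, endpoint-preserving conversion."""
    shares = max(2, min(shares, 262144))       # clamp to the valid v1 range
    return 1 + (shares - 2) * 9999 // 262142   # 2 -> 1, 262144 -> 10000
```

`MemoryLimit=` needs no arithmetic: the same byte value moves to `MemoryMax=` unchanged.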
Feb 9 19:44:55.448104 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:44:55.509146 systemd[1]: Started kubelet.service. Feb 9 19:44:55.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:44:55.559026 kubelet[1568]: E0209 19:44:55.558959 1568 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:44:55.561007 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:44:55.561208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:44:55.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:44:55.923559 env[1210]: time="2024-02-09T19:44:55.923512379Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 19:44:56.623955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1520016767.mount: Deactivated successfully. 
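The kubelet exit above ("the container runtime endpoint address was not specified or empty") means the unit was started without `--container-runtime-endpoint`. A hedged sketch of one common fix via a systemd drop-in; the file path, drop-in name, and the assumption that the unit expands `$KUBELET_EXTRA_ARGS` are all illustrative, and the socket path assumes containerd as the runtime:

```ini
# /etc/systemd/system/kubelet.service.d/10-runtime.conf  (illustrative path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock"
```

After `systemctl daemon-reload`, the restart loop seen later in this log would pick up the flag.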
Feb 9 19:44:58.476980 env[1210]: time="2024-02-09T19:44:58.476923925Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.479431 env[1210]: time="2024-02-09T19:44:58.479367708Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.481174 env[1210]: time="2024-02-09T19:44:58.481129781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.483120 env[1210]: time="2024-02-09T19:44:58.483085819Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:44:58.483741 env[1210]: time="2024-02-09T19:44:58.483710330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 9 19:44:58.495357 env[1210]: time="2024-02-09T19:44:58.495310029Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 19:45:00.807868 env[1210]: time="2024-02-09T19:45:00.807800468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:00.809934 env[1210]: time="2024-02-09T19:45:00.809895406Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:45:00.812882 env[1210]: time="2024-02-09T19:45:00.812854114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:00.814901 env[1210]: time="2024-02-09T19:45:00.814874422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:00.815698 env[1210]: time="2024-02-09T19:45:00.815642873Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 9 19:45:00.825771 env[1210]: time="2024-02-09T19:45:00.825739223Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 19:45:02.041821 env[1210]: time="2024-02-09T19:45:02.041758119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:02.044182 env[1210]: time="2024-02-09T19:45:02.044161455Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:02.046873 env[1210]: time="2024-02-09T19:45:02.046838394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:02.048609 env[1210]: time="2024-02-09T19:45:02.048583697Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:02.049343 env[1210]: time="2024-02-09T19:45:02.049297906Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 9 19:45:02.057031 env[1210]: time="2024-02-09T19:45:02.056993675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 19:45:03.892728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3901697408.mount: Deactivated successfully. Feb 9 19:45:04.332024 env[1210]: time="2024-02-09T19:45:04.331894484Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.334936 env[1210]: time="2024-02-09T19:45:04.334873640Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.337782 env[1210]: time="2024-02-09T19:45:04.337730616Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.339249 env[1210]: time="2024-02-09T19:45:04.339209048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.339737 env[1210]: time="2024-02-09T19:45:04.339707403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 9 19:45:04.349321 env[1210]: time="2024-02-09T19:45:04.349291583Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:45:04.975676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380867495.mount: Deactivated successfully. Feb 9 19:45:04.980814 env[1210]: time="2024-02-09T19:45:04.980770987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.982453 env[1210]: time="2024-02-09T19:45:04.982418256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.983928 env[1210]: time="2024-02-09T19:45:04.983882862Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.985085 env[1210]: time="2024-02-09T19:45:04.985046734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:04.985490 env[1210]: time="2024-02-09T19:45:04.985447465Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:45:04.994998 env[1210]: time="2024-02-09T19:45:04.994964560Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 19:45:05.646925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Feb 9 19:45:05.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:05.647091 systemd[1]: Stopped kubelet.service. Feb 9 19:45:05.648447 systemd[1]: Started kubelet.service. Feb 9 19:45:05.649403 kernel: kauditd_printk_skb: 87 callbacks suppressed Feb 9 19:45:05.649487 kernel: audit: type=1130 audit(1707507905.646:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:05.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:05.657910 kernel: audit: type=1131 audit(1707507905.646:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:05.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:05.662429 kernel: audit: type=1130 audit(1707507905.648:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:05.719632 kubelet[1629]: E0209 19:45:05.719568 1629 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 19:45:05.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:45:05.724067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:45:05.724250 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:45:05.731421 kernel: audit: type=1131 audit(1707507905.724:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 19:45:05.823533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1710529615.mount: Deactivated successfully. 
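The kernel audit lines carry raw epoch timestamps, e.g. `audit(1707507905.724:180)`, while journald prefixes wall-clock time. Converting the epoch seconds from that record shows the two clocks agree with the `Feb 9 19:45:05` entries above:

```python
from datetime import datetime, timezone

# Epoch seconds from the audit record audit(1707507905.724:180) logged above
ts = datetime.fromtimestamp(1707507905, tz=timezone.utc)
print(ts.isoformat())  # 2024-02-09T19:45:05+00:00
```

The trailing `:180` is the audit event serial number, not part of the timestamp.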
Feb 9 19:45:11.545064 env[1210]: time="2024-02-09T19:45:11.544991679Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:11.625426 env[1210]: time="2024-02-09T19:45:11.625370319Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:11.627279 env[1210]: time="2024-02-09T19:45:11.627246537Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:11.628954 env[1210]: time="2024-02-09T19:45:11.628913442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:11.629441 env[1210]: time="2024-02-09T19:45:11.629414602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 9 19:45:11.637946 env[1210]: time="2024-02-09T19:45:11.637902487Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 19:45:12.256599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount494052646.mount: Deactivated successfully. 
Feb 9 19:45:12.785003 env[1210]: time="2024-02-09T19:45:12.784956232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:12.786686 env[1210]: time="2024-02-09T19:45:12.786660177Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:12.788034 env[1210]: time="2024-02-09T19:45:12.787994529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:12.789233 env[1210]: time="2024-02-09T19:45:12.789196934Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:12.789758 env[1210]: time="2024-02-09T19:45:12.789730223Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 9 19:45:14.742300 systemd[1]: Stopped kubelet.service. Feb 9 19:45:14.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:14.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:14.746779 kernel: audit: type=1130 audit(1707507914.742:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:14.746843 kernel: audit: type=1131 audit(1707507914.742:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:14.755437 systemd[1]: Reloading. Feb 9 19:45:14.825803 /usr/lib/systemd/system-generators/torcx-generator[1734]: time="2024-02-09T19:45:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:14.825839 /usr/lib/systemd/system-generators/torcx-generator[1734]: time="2024-02-09T19:45:14Z" level=info msg="torcx already run" Feb 9 19:45:14.883912 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:14.883931 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:14.902664 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:14.971737 systemd[1]: Started kubelet.service. Feb 9 19:45:14.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:14.974408 kernel: audit: type=1130 audit(1707507914.971:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:15.016600 kubelet[1781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:15.016600 kubelet[1781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:15.017285 kubelet[1781]: I0209 19:45:15.017234 1781 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:45:15.019184 kubelet[1781]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:15.019184 kubelet[1781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:45:15.220682 kubelet[1781]: I0209 19:45:15.220642 1781 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:45:15.220682 kubelet[1781]: I0209 19:45:15.220667 1781 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:45:15.220893 kubelet[1781]: I0209 19:45:15.220879 1781 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:45:15.223071 kubelet[1781]: I0209 19:45:15.223041 1781 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:45:15.223791 kubelet[1781]: E0209 19:45:15.223773 1781 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.227912 kubelet[1781]: I0209 19:45:15.227886 1781 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:45:15.228262 kubelet[1781]: I0209 19:45:15.228234 1781 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:45:15.228316 kubelet[1781]: I0209 19:45:15.228303 1781 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:45:15.228439 kubelet[1781]: I0209 19:45:15.228322 1781 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:45:15.228439 kubelet[1781]: I0209 19:45:15.228333 1781 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:45:15.228525 kubelet[1781]: I0209 19:45:15.228482 1781 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:45:15.232966 kubelet[1781]: I0209 19:45:15.232913 1781 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:45:15.232966 kubelet[1781]: I0209 19:45:15.232950 1781 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:45:15.232966 kubelet[1781]: I0209 19:45:15.232975 1781 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:45:15.233198 kubelet[1781]: I0209 19:45:15.232992 1781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:45:15.233887 kubelet[1781]: W0209 19:45:15.233833 1781 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.233945 kubelet[1781]: E0209 19:45:15.233893 1781 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.233985 kubelet[1781]: W0209 19:45:15.233957 1781 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.234014 kubelet[1781]: E0209 19:45:15.233991 1781 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.234090 kubelet[1781]: I0209 19:45:15.234066 1781 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 
19:45:15.234735 kubelet[1781]: W0209 19:45:15.234717 1781 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:45:15.235563 kubelet[1781]: I0209 19:45:15.235542 1781 server.go:1186] "Started kubelet" Feb 9 19:45:15.235669 kubelet[1781]: I0209 19:45:15.235643 1781 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:45:15.236527 kubelet[1781]: I0209 19:45:15.236504 1781 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:45:15.236000 audit[1781]: AVC avc: denied { mac_admin } for pid=1781 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:15.236000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:15.240663 kubelet[1781]: I0209 19:45:15.237330 1781 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:45:15.240663 kubelet[1781]: I0209 19:45:15.237357 1781 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:45:15.240663 kubelet[1781]: I0209 19:45:15.237426 1781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:45:15.240663 kubelet[1781]: E0209 19:45:15.237680 1781 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:45:15.240663 kubelet[1781]: E0209 19:45:15.237701 1781 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:45:15.240844 kubelet[1781]: E0209 19:45:15.237908 1781 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17b24966f2c7c50f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 45, 15, 235509519, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 45, 15, 235509519, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.0.0.60:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.60:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:45:15.240844 kubelet[1781]: I0209 19:45:15.238083 1781 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:45:15.240844 kubelet[1781]: E0209 19:45:15.238593 1781 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 9 19:45:15.240844 kubelet[1781]: W0209 19:45:15.238879 1781 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.241041 kubelet[1781]: E0209 19:45:15.238913 1781 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.241041 kubelet[1781]: I0209 19:45:15.238927 1781 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:45:15.241041 kubelet[1781]: E0209 19:45:15.239079 1781 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.241417 kernel: audit: type=1400 audit(1707507915.236:184): avc: denied { mac_admin } for pid=1781 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:15.241474 kernel: audit: type=1401 audit(1707507915.236:184): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:15.241497 kernel: audit: type=1300 audit(1707507915.236:184): arch=c000003e syscall=188 success=no exit=-22 a0=c000c5eb10 a1=c0000fd788 a2=c000c5eae0 a3=25 items=0 ppid=1 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.236000 audit[1781]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c5eb10 a1=c0000fd788 a2=c000c5eae0 a3=25 items=0 ppid=1 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.251457 kernel: audit: type=1327 audit(1707507915.236:184): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:15.251600 kernel: audit: type=1400 audit(1707507915.236:185): avc: denied { mac_admin } for pid=1781 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:15.236000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:15.236000 audit[1781]: AVC avc: denied { mac_admin } for pid=1781 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:15.236000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:15.253109 kernel: audit: type=1401 audit(1707507915.236:185): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:15.253141 kernel: audit: type=1300 audit(1707507915.236:185): arch=c000003e syscall=188 success=no exit=-22 a0=c000c56560 a1=c0000fd7a0 a2=c000c5eba0 a3=25 items=0 ppid=1 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.236000 audit[1781]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c56560 a1=c0000fd7a0 a2=c000c5eba0 a3=25 items=0 ppid=1 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.236000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:15.239000 audit[1793]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1793 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.239000 audit[1793]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffee544e7d0 a2=0 a3=7ffee544e7bc items=0 ppid=1781 pid=1793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:45:15.240000 audit[1794]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1794 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.240000 audit[1794]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff89377270 a2=0 a3=7fff8937725c items=0 ppid=1781 pid=1794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:45:15.242000 audit[1796]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1796 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.242000 audit[1796]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff3a3f8fa0 a2=0 a3=7fff3a3f8f8c items=0 ppid=1781 pid=1796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:45:15.244000 audit[1798]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1798 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.244000 audit[1798]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcce17cdd0 a2=0 a3=7ffcce17cdbc items=0 ppid=1781 pid=1798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.244000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 19:45:15.253000 audit[1801]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1801 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.253000 audit[1801]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff58ce7fe0 a2=0 a3=7fff58ce7fcc items=0 ppid=1781 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 
19:45:15.254000 audit[1803]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.254000 audit[1803]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff81f29980 a2=0 a3=7fff81f2996c items=0 ppid=1781 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:45:15.259000 audit[1810]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=1810 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.259000 audit[1810]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffeb717acc0 a2=0 a3=7ffeb717acac items=0 ppid=1781 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:45:15.263000 audit[1813]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=1813 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.263000 audit[1813]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffcd0759940 a2=0 a3=7ffcd075992c items=0 ppid=1781 pid=1813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.263000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:45:15.264000 audit[1814]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1814 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.264000 audit[1814]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc09fa760 a2=0 a3=7fffc09fa74c items=0 ppid=1781 pid=1814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:45:15.265000 audit[1815]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.265000 audit[1815]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe66b63240 a2=0 a3=7ffe66b6322c items=0 ppid=1781 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.265000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:45:15.267000 audit[1817]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=1817 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.267000 audit[1817]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe1811b9a0 a2=0 a3=7ffe1811b98c items=0 ppid=1781 pid=1817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:45:15.270000 audit[1819]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=1819 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.270000 audit[1819]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff9d4e1ed0 a2=0 a3=7fff9d4e1ebc items=0 ppid=1781 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:45:15.271000 audit[1821]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.271000 audit[1821]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff08f01d40 a2=0 a3=7fff08f01d2c items=0 ppid=1781 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.271000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:45:15.273000 audit[1823]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=1823 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.273000 audit[1823]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffd709b7040 a2=0 a3=7ffd709b702c items=0 ppid=1781 pid=1823 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.273000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:45:15.274000 audit[1825]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=1825 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.274000 audit[1825]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffcd5ae3130 a2=0 a3=7ffcd5ae311c items=0 ppid=1781 pid=1825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:45:15.275576 kubelet[1781]: I0209 19:45:15.275549 1781 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 9 19:45:15.275000 audit[1827]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.275000 audit[1827]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff94590420 a2=0 a3=7fff9459040c items=0 ppid=1781 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 19:45:15.278093 kubelet[1781]: I0209 19:45:15.278042 1781 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:45:15.278186 kubelet[1781]: I0209 19:45:15.278169 1781 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:45:15.278287 kubelet[1781]: I0209 19:45:15.278271 1781 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:15.277000 audit[1829]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=1829 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.277000 audit[1829]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffca8c17280 a2=0 a3=7ffca8c1726c items=0 ppid=1781 pid=1829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.277000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:45:15.279000 audit[1830]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.279000 audit[1830]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 
a1=7ffd96bba6b0 a2=0 a3=7ffd96bba69c items=0 ppid=1781 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 19:45:15.279000 audit[1831]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=1831 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.279000 audit[1831]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe61674650 a2=0 a3=7ffe6167463c items=0 ppid=1781 pid=1831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:45:15.280000 audit[1833]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:15.280000 audit[1833]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6bc8ea50 a2=0 a3=7ffd6bc8ea3c items=0 ppid=1781 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.280000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:45:15.281670 kubelet[1781]: I0209 19:45:15.281654 1781 policy_none.go:49] "None policy: Start" Feb 9 19:45:15.281000 audit[1834]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=1834 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.281000 audit[1834]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd75324130 a2=0 a3=7ffd7532411c items=0 ppid=1781 pid=1834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 19:45:15.282222 kubelet[1781]: I0209 19:45:15.282109 1781 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:45:15.282222 kubelet[1781]: I0209 19:45:15.282127 1781 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:45:15.282000 audit[1835]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=1835 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.282000 audit[1835]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff191a4f30 a2=0 a3=7fff191a4f1c items=0 ppid=1781 pid=1835 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 19:45:15.284000 audit[1837]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.284000 audit[1837]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffc5e99ccc0 a2=0 a3=7ffc5e99ccac items=0 ppid=1781 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 19:45:15.285000 audit[1838]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=1838 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.285000 audit[1838]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd48a8240 a2=0 a3=7ffcd48a822c items=0 ppid=1781 pid=1838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.285000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 19:45:15.286000 audit[1839]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=1839 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.286000 audit[1839]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff275a7720 a2=0 a3=7fff275a770c items=0 ppid=1781 pid=1839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 19:45:15.287000 audit[1841]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=1841 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.287000 audit[1841]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec8bea550 a2=0 
a3=7ffec8bea53c items=0 ppid=1781 pid=1841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.287000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 19:45:15.288187 kubelet[1781]: I0209 19:45:15.288155 1781 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:45:15.287000 audit[1781]: AVC avc: denied { mac_admin } for pid=1781 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:15.287000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:15.287000 audit[1781]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a64240 a1=c000a3d8c0 a2=c000a64210 a3=25 items=0 ppid=1 pid=1781 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.287000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:15.288361 kubelet[1781]: I0209 19:45:15.288233 1781 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:45:15.288450 kubelet[1781]: I0209 19:45:15.288433 1781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:45:15.289685 kubelet[1781]: E0209 19:45:15.289669 1781 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 9 19:45:15.289000 audit[1843]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=1843 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.289000 audit[1843]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe45d83ee0 a2=0 a3=7ffe45d83ecc items=0 ppid=1781 pid=1843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 19:45:15.291000 audit[1845]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=1845 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.291000 audit[1845]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffd390f68e0 a2=0 a3=7ffd390f68cc items=0 ppid=1781 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.291000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 19:45:15.293000 audit[1847]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=1847 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.293000 audit[1847]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc8e468a60 a2=0 a3=7ffc8e468a4c items=0 ppid=1781 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 19:45:15.295000 audit[1849]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=1849 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.295000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffd1e061fc0 a2=0 a3=7ffd1e061fac items=0 ppid=1781 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.295000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 19:45:15.296005 kubelet[1781]: I0209 19:45:15.295975 1781 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:45:15.296005 kubelet[1781]: I0209 19:45:15.296000 1781 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:45:15.296073 kubelet[1781]: I0209 19:45:15.296018 1781 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:45:15.296073 kubelet[1781]: E0209 19:45:15.296057 1781 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:45:15.296633 kubelet[1781]: W0209 19:45:15.296433 1781 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.296725 kubelet[1781]: E0209 19:45:15.296709 1781 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.296000 audit[1850]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.296000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1ef268d0 a2=0 a3=7ffe1ef268bc items=0 ppid=1781 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 19:45:15.297000 audit[1851]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.297000 audit[1851]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffe9cd15000 a2=0 a3=7ffe9cd14fec items=0 ppid=1781 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.297000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 19:45:15.298000 audit[1852]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:15.298000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff0dc5f5e0 a2=0 a3=7fff0dc5f5cc items=0 ppid=1781 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:15.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 19:45:15.339957 kubelet[1781]: I0209 19:45:15.339914 1781 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:15.340232 kubelet[1781]: E0209 19:45:15.340215 1781 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 9 19:45:15.396630 kubelet[1781]: I0209 19:45:15.396575 1781 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:15.397845 kubelet[1781]: I0209 19:45:15.397822 1781 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:15.398584 kubelet[1781]: I0209 19:45:15.398547 1781 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:15.399117 kubelet[1781]: I0209 19:45:15.399096 1781 status_manager.go:698] "Failed to get status for pod" 
podUID=550020dd9f101bcc23e1d3c651841c4d pod="kube-system/kube-controller-manager-localhost" err="Get \"https://10.0.0.60:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 10.0.0.60:6443: connect: connection refused" Feb 9 19:45:15.399658 kubelet[1781]: I0209 19:45:15.399636 1781 status_manager.go:698] "Failed to get status for pod" podUID=72ae17a74a2eae76daac6d298477aff0 pod="kube-system/kube-scheduler-localhost" err="Get \"https://10.0.0.60:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 10.0.0.60:6443: connect: connection refused" Feb 9 19:45:15.399804 kubelet[1781]: I0209 19:45:15.399789 1781 status_manager.go:698] "Failed to get status for pod" podUID=9a08ffd96d6af4e90880feeb254c69ef pod="kube-system/kube-apiserver-localhost" err="Get \"https://10.0.0.60:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 10.0.0.60:6443: connect: connection refused" Feb 9 19:45:15.440122 kubelet[1781]: E0209 19:45:15.440073 1781 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.540685 kubelet[1781]: I0209 19:45:15.540571 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:15.540685 kubelet[1781]: I0209 19:45:15.540675 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: 
\"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:15.540814 kubelet[1781]: I0209 19:45:15.540732 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:15.540814 kubelet[1781]: I0209 19:45:15.540763 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:15.540814 kubelet[1781]: I0209 19:45:15.540785 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:15.540814 kubelet[1781]: I0209 19:45:15.540802 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:15.540906 kubelet[1781]: I0209 19:45:15.540829 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:15.540906 kubelet[1781]: I0209 19:45:15.540848 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:45:15.540906 kubelet[1781]: I0209 19:45:15.540866 1781 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:15.541301 kubelet[1781]: I0209 19:45:15.541288 1781 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:15.541705 kubelet[1781]: E0209 19:45:15.541683 1781 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 9 19:45:15.701306 kubelet[1781]: E0209 19:45:15.701264 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:15.701900 env[1210]: time="2024-02-09T19:45:15.701868612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:15.702941 kubelet[1781]: E0209 19:45:15.702910 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:15.703198 env[1210]: 
time="2024-02-09T19:45:15.703157669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:15.703566 kubelet[1781]: E0209 19:45:15.703547 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:15.703791 env[1210]: time="2024-02-09T19:45:15.703768154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a08ffd96d6af4e90880feeb254c69ef,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:15.840818 kubelet[1781]: E0209 19:45:15.840731 1781 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:15.943173 kubelet[1781]: I0209 19:45:15.943133 1781 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:15.943464 kubelet[1781]: E0209 19:45:15.943452 1781 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Feb 9 19:45:16.203379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687976026.mount: Deactivated successfully. 
Feb 9 19:45:16.206961 env[1210]: time="2024-02-09T19:45:16.206928890Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.210453 env[1210]: time="2024-02-09T19:45:16.210403144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.212068 env[1210]: time="2024-02-09T19:45:16.212040574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.213256 env[1210]: time="2024-02-09T19:45:16.213232930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.214664 env[1210]: time="2024-02-09T19:45:16.214636021Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.214811 kubelet[1781]: W0209 19:45:16.214757 1781 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:16.215048 kubelet[1781]: E0209 19:45:16.214825 1781 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.60:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.60:6443: connect: connection refused Feb 9 19:45:16.215924 env[1210]: time="2024-02-09T19:45:16.215900943Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.217110 env[1210]: time="2024-02-09T19:45:16.217080975Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.218315 env[1210]: time="2024-02-09T19:45:16.218260387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.220003 env[1210]: time="2024-02-09T19:45:16.219974360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.220703 env[1210]: time="2024-02-09T19:45:16.220665085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.222675 env[1210]: time="2024-02-09T19:45:16.222637484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.224381 env[1210]: time="2024-02-09T19:45:16.224349544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:16.242343 env[1210]: time="2024-02-09T19:45:16.242289882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:16.242532 env[1210]: time="2024-02-09T19:45:16.242508422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:16.242617 env[1210]: time="2024-02-09T19:45:16.242594713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:16.242885 env[1210]: time="2024-02-09T19:45:16.242852707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d658f3e75f9bfe7afe54ada01e6b67fd1c9d5318835dabdc4dd57fb79da0a1a pid=1861 runtime=io.containerd.runc.v2 Feb 9 19:45:16.247513 env[1210]: time="2024-02-09T19:45:16.247458643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:16.247630 env[1210]: time="2024-02-09T19:45:16.247527522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:16.247630 env[1210]: time="2024-02-09T19:45:16.247547770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:16.247731 env[1210]: time="2024-02-09T19:45:16.247702951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a5c549842f43c58cbbbd862b6deec2f88f3c5d3b2f18422259577cdc4f2698b pid=1879 runtime=io.containerd.runc.v2 Feb 9 19:45:16.252570 env[1210]: time="2024-02-09T19:45:16.252491861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:16.252570 env[1210]: time="2024-02-09T19:45:16.252539510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:16.252764 env[1210]: time="2024-02-09T19:45:16.252731640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:16.253146 env[1210]: time="2024-02-09T19:45:16.253105311Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46ceca616235ced90371b85f5b60225de82304f1cf679c16d2b6e234a0049f44 pid=1904 runtime=io.containerd.runc.v2 Feb 9 19:45:16.301840 env[1210]: time="2024-02-09T19:45:16.301788291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a08ffd96d6af4e90880feeb254c69ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d658f3e75f9bfe7afe54ada01e6b67fd1c9d5318835dabdc4dd57fb79da0a1a\"" Feb 9 19:45:16.302888 kubelet[1781]: E0209 19:45:16.302858 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:16.305498 env[1210]: time="2024-02-09T19:45:16.305469293Z" level=info msg="CreateContainer within sandbox \"3d658f3e75f9bfe7afe54ada01e6b67fd1c9d5318835dabdc4dd57fb79da0a1a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:45:16.308571 env[1210]: time="2024-02-09T19:45:16.308549469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae17a74a2eae76daac6d298477aff0,Namespace:kube-system,Attempt:0,} returns sandbox id \"46ceca616235ced90371b85f5b60225de82304f1cf679c16d2b6e234a0049f44\"" Feb 9 19:45:16.309081 kubelet[1781]: E0209 19:45:16.309063 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:16.310290 env[1210]: time="2024-02-09T19:45:16.310270826Z" level=info 
msg="CreateContainer within sandbox \"46ceca616235ced90371b85f5b60225de82304f1cf679c16d2b6e234a0049f44\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:45:16.315026 env[1210]: time="2024-02-09T19:45:16.314983683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:550020dd9f101bcc23e1d3c651841c4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a5c549842f43c58cbbbd862b6deec2f88f3c5d3b2f18422259577cdc4f2698b\"" Feb 9 19:45:16.316107 kubelet[1781]: E0209 19:45:16.316089 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:16.318837 env[1210]: time="2024-02-09T19:45:16.318797183Z" level=info msg="CreateContainer within sandbox \"3a5c549842f43c58cbbbd862b6deec2f88f3c5d3b2f18422259577cdc4f2698b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:45:16.327311 env[1210]: time="2024-02-09T19:45:16.327264379Z" level=info msg="CreateContainer within sandbox \"3d658f3e75f9bfe7afe54ada01e6b67fd1c9d5318835dabdc4dd57fb79da0a1a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e0b4071940747344d0cf8a3b04b2630b044db12d70daeffe5deee24e55fcdd2\"" Feb 9 19:45:16.327854 env[1210]: time="2024-02-09T19:45:16.327823447Z" level=info msg="StartContainer for \"4e0b4071940747344d0cf8a3b04b2630b044db12d70daeffe5deee24e55fcdd2\"" Feb 9 19:45:16.338952 env[1210]: time="2024-02-09T19:45:16.338922408Z" level=info msg="CreateContainer within sandbox \"3a5c549842f43c58cbbbd862b6deec2f88f3c5d3b2f18422259577cdc4f2698b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bc6877c3cb7c2f3edb6903475b2c47bba8ddca7909084033eccb62b455def06c\"" Feb 9 19:45:16.339484 env[1210]: time="2024-02-09T19:45:16.339452652Z" level=info msg="StartContainer for 
\"bc6877c3cb7c2f3edb6903475b2c47bba8ddca7909084033eccb62b455def06c\"" Feb 9 19:45:16.347402 env[1210]: time="2024-02-09T19:45:16.347368925Z" level=info msg="CreateContainer within sandbox \"46ceca616235ced90371b85f5b60225de82304f1cf679c16d2b6e234a0049f44\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02d4de16ee3936d5bbca1e2024d8ea87a8a8cc1b028f200b79055df0f49b7ae8\"" Feb 9 19:45:16.347692 env[1210]: time="2024-02-09T19:45:16.347674678Z" level=info msg="StartContainer for \"02d4de16ee3936d5bbca1e2024d8ea87a8a8cc1b028f200b79055df0f49b7ae8\"" Feb 9 19:45:16.390779 env[1210]: time="2024-02-09T19:45:16.390731891Z" level=info msg="StartContainer for \"4e0b4071940747344d0cf8a3b04b2630b044db12d70daeffe5deee24e55fcdd2\" returns successfully" Feb 9 19:45:16.406784 env[1210]: time="2024-02-09T19:45:16.406620291Z" level=info msg="StartContainer for \"bc6877c3cb7c2f3edb6903475b2c47bba8ddca7909084033eccb62b455def06c\" returns successfully" Feb 9 19:45:16.430485 env[1210]: time="2024-02-09T19:45:16.430425787Z" level=info msg="StartContainer for \"02d4de16ee3936d5bbca1e2024d8ea87a8a8cc1b028f200b79055df0f49b7ae8\" returns successfully" Feb 9 19:45:16.745165 kubelet[1781]: I0209 19:45:16.745140 1781 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:17.302210 kubelet[1781]: E0209 19:45:17.302170 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:17.303936 kubelet[1781]: E0209 19:45:17.303915 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:17.305643 kubelet[1781]: E0209 19:45:17.305622 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Feb 9 19:45:17.796986 kubelet[1781]: E0209 19:45:17.796940 1781 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 9 19:45:17.915346 kubelet[1781]: I0209 19:45:17.915293 1781 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:45:18.236504 kubelet[1781]: I0209 19:45:18.236468 1781 apiserver.go:52] "Watching apiserver" Feb 9 19:45:18.439205 kubelet[1781]: I0209 19:45:18.439170 1781 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:45:18.453559 kubelet[1781]: I0209 19:45:18.453522 1781 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:45:18.839415 kubelet[1781]: E0209 19:45:18.839380 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.044993 kubelet[1781]: E0209 19:45:19.044953 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.238272 kubelet[1781]: E0209 19:45:19.238202 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.308405 kubelet[1781]: E0209 19:45:19.308367 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.308543 kubelet[1781]: E0209 19:45:19.308475 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:19.308781 kubelet[1781]: E0209 19:45:19.308763 1781 dns.go:156] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.309941 kubelet[1781]: E0209 19:45:20.309907 1781 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:20.656353 systemd[1]: Reloading. Feb 9 19:45:20.719956 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T19:45:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:45:20.719988 /usr/lib/systemd/system-generators/torcx-generator[2118]: time="2024-02-09T19:45:20Z" level=info msg="torcx already run" Feb 9 19:45:20.784051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:45:20.784070 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:45:20.805913 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:45:20.883621 kubelet[1781]: I0209 19:45:20.883590 1781 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:45:20.883741 systemd[1]: Stopping kubelet.service... Feb 9 19:45:20.899744 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:45:20.900065 systemd[1]: Stopped kubelet.service. 
Feb 9 19:45:20.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:20.900725 kernel: kauditd_printk_skb: 104 callbacks suppressed Feb 9 19:45:20.900762 kernel: audit: type=1131 audit(1707507920.899:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:20.901690 systemd[1]: Started kubelet.service. Feb 9 19:45:20.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:20.906456 kernel: audit: type=1130 audit(1707507920.900:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:20.948191 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 19:45:20.948191 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:20.948651 kubelet[2166]: I0209 19:45:20.948222 2166 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:45:20.949510 kubelet[2166]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 9 19:45:20.949510 kubelet[2166]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:45:20.954269 kubelet[2166]: I0209 19:45:20.953990 2166 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 19:45:20.954269 kubelet[2166]: I0209 19:45:20.954007 2166 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:45:20.954269 kubelet[2166]: I0209 19:45:20.954171 2166 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 19:45:20.955350 kubelet[2166]: I0209 19:45:20.955325 2166 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:45:20.955919 kubelet[2166]: I0209 19:45:20.955874 2166 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:45:20.959475 kubelet[2166]: I0209 19:45:20.959453 2166 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:45:20.959813 kubelet[2166]: I0209 19:45:20.959789 2166 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:45:20.959873 kubelet[2166]: I0209 19:45:20.959852 2166 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:45:20.959873 kubelet[2166]: I0209 19:45:20.959870 2166 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:45:20.959999 kubelet[2166]: I0209 19:45:20.959879 2166 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 19:45:20.959999 kubelet[2166]: I0209 19:45:20.959904 2166 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 19:45:20.962604 kubelet[2166]: I0209 19:45:20.962589 2166 kubelet.go:398] "Attempting to sync node with API server" Feb 9 19:45:20.962667 kubelet[2166]: I0209 19:45:20.962608 2166 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:45:20.962667 kubelet[2166]: I0209 19:45:20.962627 2166 kubelet.go:297] "Adding apiserver pod source" Feb 9 19:45:20.962667 kubelet[2166]: I0209 19:45:20.962642 2166 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:45:20.963162 kubelet[2166]: I0209 19:45:20.963132 2166 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:45:20.963681 kubelet[2166]: I0209 19:45:20.963657 2166 server.go:1186] "Started kubelet" Feb 9 19:45:20.963933 kubelet[2166]: I0209 19:45:20.963913 2166 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:45:20.964466 kubelet[2166]: I0209 19:45:20.964440 2166 server.go:451] "Adding debug handlers to kubelet server" Feb 9 19:45:20.976346 kernel: audit: type=1400 audit(1707507920.966:222): avc: denied { mac_admin } for pid=2166 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:20.976460 kernel: audit: type=1401 audit(1707507920.966:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:20.976475 kernel: audit: type=1300 audit(1707507920.966:222): arch=c000003e syscall=188 success=no exit=-22 a0=c000615440 a1=c000243ae8 a2=c000615410 a3=25 items=0 ppid=1 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:20.976489 kernel: audit: type=1327 audit(1707507920.966:222): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:20.966000 audit[2166]: AVC avc: denied { mac_admin } for pid=2166 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:20.966000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:20.966000 audit[2166]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000615440 a1=c000243ae8 a2=c000615410 a3=25 items=0 ppid=1 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:20.966000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:20.976710 kubelet[2166]: I0209 19:45:20.967260 2166 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 19:45:20.976710 kubelet[2166]: I0209 19:45:20.967288 2166 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 19:45:20.976710 kubelet[2166]: I0209 19:45:20.967305 2166 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:45:20.976710 kubelet[2166]: I0209 19:45:20.973357 2166 
volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 19:45:20.976710 kubelet[2166]: I0209 19:45:20.974522 2166 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:45:20.966000 audit[2166]: AVC avc: denied { mac_admin } for pid=2166 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:20.982018 kernel: audit: type=1400 audit(1707507920.966:223): avc: denied { mac_admin } for pid=2166 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:20.982062 kernel: audit: type=1401 audit(1707507920.966:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:20.966000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:20.987412 kernel: audit: type=1300 audit(1707507920.966:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000f9c240 a1=c000243b00 a2=c0006154d0 a3=25 items=0 ppid=1 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:20.966000 audit[2166]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000f9c240 a1=c000243b00 a2=c0006154d0 a3=25 items=0 ppid=1 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:20.966000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:20.990777 kubelet[2166]: E0209 
19:45:20.988986 2166 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:45:20.990777 kubelet[2166]: E0209 19:45:20.989012 2166 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:45:20.991422 kernel: audit: type=1327 audit(1707507920.966:223): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:21.008017 kubelet[2166]: I0209 19:45:21.007996 2166 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:45:21.027620 kubelet[2166]: I0209 19:45:21.027585 2166 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 19:45:21.027620 kubelet[2166]: I0209 19:45:21.027607 2166 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 19:45:21.027941 kubelet[2166]: I0209 19:45:21.027662 2166 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 19:45:21.027941 kubelet[2166]: E0209 19:45:21.027713 2166 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:45:21.054291 kubelet[2166]: I0209 19:45:21.054260 2166 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:45:21.054291 kubelet[2166]: I0209 19:45:21.054281 2166 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:45:21.054291 kubelet[2166]: I0209 19:45:21.054302 2166 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:45:21.054535 kubelet[2166]: I0209 19:45:21.054464 2166 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:45:21.054535 kubelet[2166]: I0209 19:45:21.054478 2166 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:45:21.054535 kubelet[2166]: I0209 19:45:21.054485 2166 policy_none.go:49] "None policy: Start" Feb 9 19:45:21.055492 kubelet[2166]: I0209 19:45:21.055447 2166 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:45:21.055540 kubelet[2166]: I0209 19:45:21.055517 2166 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:45:21.056038 kubelet[2166]: I0209 19:45:21.055759 2166 state_mem.go:75] "Updated machine memory state" Feb 9 19:45:21.057036 kubelet[2166]: I0209 19:45:21.057017 2166 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:45:21.056000 audit[2166]: AVC avc: denied { mac_admin } for pid=2166 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:45:21.056000 audit: 
SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 19:45:21.056000 audit[2166]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001935530 a1=c0018adf08 a2=c001935500 a3=25 items=0 ppid=1 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:21.056000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 19:45:21.057294 kubelet[2166]: I0209 19:45:21.057080 2166 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 19:45:21.057294 kubelet[2166]: I0209 19:45:21.057288 2166 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:45:21.076527 kubelet[2166]: I0209 19:45:21.076487 2166 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Feb 9 19:45:21.082881 kubelet[2166]: I0209 19:45:21.082123 2166 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Feb 9 19:45:21.082881 kubelet[2166]: I0209 19:45:21.082178 2166 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Feb 9 19:45:21.128260 kubelet[2166]: I0209 19:45:21.128223 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:21.128406 kubelet[2166]: I0209 19:45:21.128322 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:21.128406 kubelet[2166]: I0209 19:45:21.128347 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:21.133360 kubelet[2166]: E0209 19:45:21.133327 2166 kubelet.go:1802] "Failed creating a mirror 
pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:21.167486 kubelet[2166]: E0209 19:45:21.167449 2166 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.276419 kubelet[2166]: I0209 19:45:21.276295 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:21.276419 kubelet[2166]: I0209 19:45:21.276344 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:21.276419 kubelet[2166]: I0209 19:45:21.276373 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.276419 kubelet[2166]: I0209 19:45:21.276415 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae17a74a2eae76daac6d298477aff0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae17a74a2eae76daac6d298477aff0\") " pod="kube-system/kube-scheduler-localhost" Feb 9 19:45:21.276610 kubelet[2166]: I0209 19:45:21.276449 2166 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a08ffd96d6af4e90880feeb254c69ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a08ffd96d6af4e90880feeb254c69ef\") " pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:21.276610 kubelet[2166]: I0209 19:45:21.276481 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.276610 kubelet[2166]: I0209 19:45:21.276510 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.276610 kubelet[2166]: I0209 19:45:21.276544 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.276610 kubelet[2166]: I0209 19:45:21.276564 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/550020dd9f101bcc23e1d3c651841c4d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"550020dd9f101bcc23e1d3c651841c4d\") " pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:21.397326 kubelet[2166]: E0209 
19:45:21.397303 2166 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 19:45:21.398084 kubelet[2166]: E0209 19:45:21.398065 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:21.434330 kubelet[2166]: E0209 19:45:21.434285 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:21.468762 kubelet[2166]: E0209 19:45:21.468724 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:21.963316 kubelet[2166]: I0209 19:45:21.963238 2166 apiserver.go:52] "Watching apiserver" Feb 9 19:45:21.974824 kubelet[2166]: I0209 19:45:21.974796 2166 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:45:21.980985 kubelet[2166]: I0209 19:45:21.980955 2166 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:45:22.168659 kubelet[2166]: E0209 19:45:22.168615 2166 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 9 19:45:22.169003 kubelet[2166]: E0209 19:45:22.168981 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:22.586734 kubelet[2166]: E0209 19:45:22.586696 2166 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 9 19:45:22.586989 kubelet[2166]: E0209 19:45:22.586959 
2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:22.797788 kubelet[2166]: E0209 19:45:22.797749 2166 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 9 19:45:22.798280 kubelet[2166]: E0209 19:45:22.798264 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:23.037244 kubelet[2166]: E0209 19:45:23.037218 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:23.040345 kubelet[2166]: E0209 19:45:23.039561 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:23.040645 kubelet[2166]: E0209 19:45:23.040632 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:23.367412 kubelet[2166]: I0209 19:45:23.367370 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.367324447 pod.CreationTimestamp="2024-02-09 19:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:22.975596328 +0000 UTC m=+2.068754634" watchObservedRunningTime="2024-02-09 19:45:23.367324447 +0000 UTC m=+2.460482743" Feb 9 19:45:23.771951 kubelet[2166]: I0209 19:45:23.771830 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=4.771786262 pod.CreationTimestamp="2024-02-09 19:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:23.367740643 +0000 UTC m=+2.460898949" watchObservedRunningTime="2024-02-09 19:45:23.771786262 +0000 UTC m=+2.864944598" Feb 9 19:45:25.932060 kubelet[2166]: E0209 19:45:25.932004 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:26.040220 sudo[1360]: pam_unix(sudo:session): session closed for user root Feb 9 19:45:26.038000 audit[1360]: USER_END pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:26.040956 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:45:26.041027 kernel: audit: type=1106 audit(1707507926.038:225): pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:26.042132 sshd[1355]: pam_unix(sshd:session): session closed for user core Feb 9 19:45:26.038000 audit[1360]: CRED_DISP pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:26.044554 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:43388.service: Deactivated successfully. Feb 9 19:45:26.045669 systemd-logind[1192]: Session 7 logged out. Waiting for processes to exit. 
Feb 9 19:45:26.045998 kernel: audit: type=1104 audit(1707507926.038:226): pid=1360 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 19:45:26.046031 kernel: audit: type=1106 audit(1707507926.040:227): pid=1355 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:26.040000 audit[1355]: USER_END pid=1355 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:26.045684 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:45:26.046910 systemd-logind[1192]: Removed session 7. Feb 9 19:45:26.048792 kernel: audit: type=1104 audit(1707507926.040:228): pid=1355 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:26.040000 audit[1355]: CRED_DISP pid=1355 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:45:26.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.60:22-10.0.0.1:43388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:45:26.053380 kernel: audit: type=1131 audit(1707507926.043:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.60:22-10.0.0.1:43388 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:45:29.838007 kubelet[2166]: E0209 19:45:29.837978 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:29.850140 kubelet[2166]: I0209 19:45:29.850109 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=10.850076753 pod.CreationTimestamp="2024-02-09 19:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:23.772432897 +0000 UTC m=+2.865591203" watchObservedRunningTime="2024-02-09 19:45:29.850076753 +0000 UTC m=+8.943235059" Feb 9 19:45:30.046569 kubelet[2166]: E0209 19:45:30.046527 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:32.503410 kubelet[2166]: E0209 19:45:32.503368 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:33.387642 update_engine[1193]: I0209 19:45:33.387593 1193 update_attempter.cc:509] Updating boot flags... 
Feb 9 19:45:34.655975 kubelet[2166]: I0209 19:45:34.655935 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:34.721577 kubelet[2166]: I0209 19:45:34.721542 2166 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:45:34.721897 env[1210]: time="2024-02-09T19:45:34.721863062Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:45:34.722157 kubelet[2166]: I0209 19:45:34.722016 2166 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:45:34.761954 kubelet[2166]: I0209 19:45:34.761924 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4d67c5d1-efc2-49af-b83d-0799f08e99bf-kube-proxy\") pod \"kube-proxy-5mvrc\" (UID: \"4d67c5d1-efc2-49af-b83d-0799f08e99bf\") " pod="kube-system/kube-proxy-5mvrc" Feb 9 19:45:34.762010 kubelet[2166]: I0209 19:45:34.761965 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d67c5d1-efc2-49af-b83d-0799f08e99bf-lib-modules\") pod \"kube-proxy-5mvrc\" (UID: \"4d67c5d1-efc2-49af-b83d-0799f08e99bf\") " pod="kube-system/kube-proxy-5mvrc" Feb 9 19:45:34.762010 kubelet[2166]: I0209 19:45:34.761988 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d7wv\" (UniqueName: \"kubernetes.io/projected/4d67c5d1-efc2-49af-b83d-0799f08e99bf-kube-api-access-7d7wv\") pod \"kube-proxy-5mvrc\" (UID: \"4d67c5d1-efc2-49af-b83d-0799f08e99bf\") " pod="kube-system/kube-proxy-5mvrc" Feb 9 19:45:34.762056 kubelet[2166]: I0209 19:45:34.762016 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/4d67c5d1-efc2-49af-b83d-0799f08e99bf-xtables-lock\") pod \"kube-proxy-5mvrc\" (UID: \"4d67c5d1-efc2-49af-b83d-0799f08e99bf\") " pod="kube-system/kube-proxy-5mvrc" Feb 9 19:45:34.834148 kubelet[2166]: I0209 19:45:34.834112 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:34.961880 kubelet[2166]: E0209 19:45:34.961799 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:34.962299 env[1210]: time="2024-02-09T19:45:34.962227607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mvrc,Uid:4d67c5d1-efc2-49af-b83d-0799f08e99bf,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:34.962767 kubelet[2166]: I0209 19:45:34.962746 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhbc\" (UniqueName: \"kubernetes.io/projected/94fc7091-5a0c-4f77-9bac-72970ed6c1d5-kube-api-access-nzhbc\") pod \"tigera-operator-cfc98749c-2kfc4\" (UID: \"94fc7091-5a0c-4f77-9bac-72970ed6c1d5\") " pod="tigera-operator/tigera-operator-cfc98749c-2kfc4" Feb 9 19:45:34.962840 kubelet[2166]: I0209 19:45:34.962825 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/94fc7091-5a0c-4f77-9bac-72970ed6c1d5-var-lib-calico\") pod \"tigera-operator-cfc98749c-2kfc4\" (UID: \"94fc7091-5a0c-4f77-9bac-72970ed6c1d5\") " pod="tigera-operator/tigera-operator-cfc98749c-2kfc4" Feb 9 19:45:34.975937 env[1210]: time="2024-02-09T19:45:34.975882632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:34.975937 env[1210]: time="2024-02-09T19:45:34.975913339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:34.975937 env[1210]: time="2024-02-09T19:45:34.975922618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:34.976103 env[1210]: time="2024-02-09T19:45:34.976013539Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a50234bf4c2d6faf6af6b3fde3d8555ae99458b57f2ec5ad85478f8fa1279cd pid=2295 runtime=io.containerd.runc.v2 Feb 9 19:45:34.985615 systemd[1]: run-containerd-runc-k8s.io-0a50234bf4c2d6faf6af6b3fde3d8555ae99458b57f2ec5ad85478f8fa1279cd-runc.jCWcsg.mount: Deactivated successfully. Feb 9 19:45:35.003728 env[1210]: time="2024-02-09T19:45:35.003684579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5mvrc,Uid:4d67c5d1-efc2-49af-b83d-0799f08e99bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a50234bf4c2d6faf6af6b3fde3d8555ae99458b57f2ec5ad85478f8fa1279cd\"" Feb 9 19:45:35.004225 kubelet[2166]: E0209 19:45:35.004208 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:35.005908 env[1210]: time="2024-02-09T19:45:35.005876224Z" level=info msg="CreateContainer within sandbox \"0a50234bf4c2d6faf6af6b3fde3d8555ae99458b57f2ec5ad85478f8fa1279cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:45:35.024036 env[1210]: time="2024-02-09T19:45:35.023996309Z" level=info msg="CreateContainer within sandbox \"0a50234bf4c2d6faf6af6b3fde3d8555ae99458b57f2ec5ad85478f8fa1279cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"15c01e26e0c5eaa18d224f00958868151c09116cc50a20f39a57b9a09552ad61\"" Feb 9 19:45:35.025693 env[1210]: time="2024-02-09T19:45:35.025662681Z" level=info msg="StartContainer for 
\"15c01e26e0c5eaa18d224f00958868151c09116cc50a20f39a57b9a09552ad61\"" Feb 9 19:45:35.073528 env[1210]: time="2024-02-09T19:45:35.073471524Z" level=info msg="StartContainer for \"15c01e26e0c5eaa18d224f00958868151c09116cc50a20f39a57b9a09552ad61\" returns successfully" Feb 9 19:45:35.107000 audit[2385]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.107000 audit[2385]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf12b5130 a2=0 a3=7ffdf12b511c items=0 ppid=2345 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.114530 kernel: audit: type=1325 audit(1707507935.107:230): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=2385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.114593 kernel: audit: type=1300 audit(1707507935.107:230): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf12b5130 a2=0 a3=7ffdf12b511c items=0 ppid=2345 pid=2385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:45:35.116197 kernel: audit: type=1327 audit(1707507935.107:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:45:35.116250 kernel: audit: type=1325 audit(1707507935.107:231): table=nat:60 family=2 entries=1 op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.107000 audit[2387]: NETFILTER_CFG table=nat:60 family=2 entries=1 
op=nft_register_chain pid=2387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.107000 audit[2387]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea5152dc0 a2=0 a3=7ffea5152dac items=0 ppid=2345 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.122004 kernel: audit: type=1300 audit(1707507935.107:231): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea5152dc0 a2=0 a3=7ffea5152dac items=0 ppid=2345 pid=2387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.124047 kernel: audit: type=1327 audit(1707507935.107:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:45:35.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:45:35.126046 kernel: audit: type=1325 audit(1707507935.110:232): table=filter:61 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.110000 audit[2388]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=2388 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.130839 kernel: audit: type=1300 audit(1707507935.110:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff8ce432e0 a2=0 a3=7fff8ce432cc items=0 ppid=2345 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.110000 audit[2388]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 
a1=7fff8ce432e0 a2=0 a3=7fff8ce432cc items=0 ppid=2345 pid=2388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:45:35.134484 kernel: audit: type=1327 audit(1707507935.110:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:45:35.117000 audit[2386]: NETFILTER_CFG table=mangle:62 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.137409 kernel: audit: type=1325 audit(1707507935.117:233): table=mangle:62 family=10 entries=1 op=nft_register_chain pid=2386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.117000 audit[2386]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd043e3060 a2=0 a3=7ffd043e304c items=0 ppid=2345 pid=2386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 19:45:35.136000 audit[2391]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=2391 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.136000 audit[2391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca9d455f0 a2=0 a3=7ffca9d455dc items=0 ppid=2345 pid=2391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 9 19:45:35.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 19:45:35.140499 env[1210]: time="2024-02-09T19:45:35.140454565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-2kfc4,Uid:94fc7091-5a0c-4f77-9bac-72970ed6c1d5,Namespace:tigera-operator,Attempt:0,}" Feb 9 19:45:35.161000 audit[2392]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=2392 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.161000 audit[2392]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffedd59c100 a2=0 a3=7ffedd59c0ec items=0 ppid=2345 pid=2392 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 19:45:35.165788 env[1210]: time="2024-02-09T19:45:35.165727961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:35.165933 env[1210]: time="2024-02-09T19:45:35.165907631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:35.166046 env[1210]: time="2024-02-09T19:45:35.166015114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:35.166306 env[1210]: time="2024-02-09T19:45:35.166277450Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c6115147edb726ca34165d3b020492eb36511550aacf7c7956390d8c311d903 pid=2400 runtime=io.containerd.runc.v2 Feb 9 19:45:35.207808 env[1210]: time="2024-02-09T19:45:35.207770787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-2kfc4,Uid:94fc7091-5a0c-4f77-9bac-72970ed6c1d5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3c6115147edb726ca34165d3b020492eb36511550aacf7c7956390d8c311d903\"" Feb 9 19:45:35.210468 env[1210]: time="2024-02-09T19:45:35.209071197Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 19:45:35.211000 audit[2433]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.211000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd9323aa50 a2=0 a3=7ffd9323aa3c items=0 ppid=2345 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:45:35.213000 audit[2435]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2435 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.213000 audit[2435]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeec8f4960 a2=0 a3=7ffeec8f494c items=0 ppid=2345 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 19:45:35.216000 audit[2438]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.216000 audit[2438]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffce2e500b0 a2=0 a3=7ffce2e5009c items=0 ppid=2345 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 19:45:35.217000 audit[2439]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.217000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc610fbe50 a2=0 a3=7ffc610fbe3c items=0 ppid=2345 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.217000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:45:35.218000 audit[2441]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Feb 9 19:45:35.218000 audit[2441]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc49b77200 a2=0 a3=7ffc49b771ec items=0 ppid=2345 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.218000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:45:35.219000 audit[2442]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.219000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeeb9c3e50 a2=0 a3=7ffeeb9c3e3c items=0 ppid=2345 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:45:35.221000 audit[2444]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.221000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4befbba0 a2=0 a3=7fff4befbb8c items=0 ppid=2345 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.221000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:45:35.224000 audit[2447]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.224000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd7f773d70 a2=0 a3=7ffd7f773d5c items=0 ppid=2345 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 19:45:35.225000 audit[2448]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.225000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb500580 a2=0 a3=7ffceb50056c items=0 ppid=2345 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.225000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:45:35.227000 audit[2450]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.227000 audit[2450]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7fffbc56d340 a2=0 a3=7fffbc56d32c items=0 ppid=2345 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:45:35.228000 audit[2451]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.228000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff2fe4dd00 a2=0 a3=7fff2fe4dcec items=0 ppid=2345 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:45:35.230000 audit[2453]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.230000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe66f16b20 a2=0 a3=7ffe66f16b0c items=0 ppid=2345 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.230000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:45:35.234000 audit[2456]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.234000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe8d5a75d0 a2=0 a3=7ffe8d5a75bc items=0 ppid=2345 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:45:35.237000 audit[2459]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.237000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe068226e0 a2=0 a3=7ffe068226cc items=0 ppid=2345 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:45:35.238000 audit[2460]: NETFILTER_CFG table=nat:79 family=2 entries=1 
op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.238000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcdfb46050 a2=0 a3=7ffcdfb4603c items=0 ppid=2345 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:45:35.240000 audit[2462]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.240000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fffa0788710 a2=0 a3=7fffa07886fc items=0 ppid=2345 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:45:35.243000 audit[2465]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 19:45:35.243000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc8ee241f0 a2=0 a3=7ffc8ee241dc items=0 ppid=2345 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.243000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:45:35.251000 audit[2469]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:35.251000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fffc1731c70 a2=0 a3=7fffc1731c5c items=0 ppid=2345 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:35.255000 audit[2469]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:35.255000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffc1731c70 a2=0 a3=7fffc1731c5c items=0 ppid=2345 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:35.256000 audit[2472]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2472 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.256000 audit[2472]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff779d9690 a2=0 a3=7fff779d967c items=0 ppid=2345 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 19:45:35.258000 audit[2474]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.258000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe52411e90 a2=0 a3=7ffe52411e7c items=0 ppid=2345 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.258000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 19:45:35.262000 audit[2477]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.262000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc41c217c0 a2=0 a3=7ffc41c217ac items=0 ppid=2345 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 19:45:35.263000 audit[2478]: NETFILTER_CFG 
table=filter:87 family=10 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.263000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde8b00ad0 a2=0 a3=7ffde8b00abc items=0 ppid=2345 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 19:45:35.265000 audit[2480]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.265000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe09116410 a2=0 a3=7ffe091163fc items=0 ppid=2345 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 19:45:35.265000 audit[2481]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.265000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2e356a20 a2=0 a3=7ffc2e356a0c items=0 ppid=2345 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.265000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 19:45:35.267000 audit[2483]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.267000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8c45bee0 a2=0 a3=7ffc8c45becc items=0 ppid=2345 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.267000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 19:45:35.269000 audit[2486]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.269000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff24ff7400 a2=0 a3=7fff24ff73ec items=0 ppid=2345 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 19:45:35.270000 audit[2487]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.270000 audit[2487]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7ffc362d1500 a2=0 a3=7ffc362d14ec items=0 ppid=2345 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 19:45:35.272000 audit[2489]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2489 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.272000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff1eee2c50 a2=0 a3=7fff1eee2c3c items=0 ppid=2345 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.272000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 19:45:35.273000 audit[2490]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.273000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9176e760 a2=0 a3=7ffd9176e74c items=0 ppid=2345 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 19:45:35.274000 audit[2492]: NETFILTER_CFG table=filter:95 family=10 
entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.274000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd56e6f8f0 a2=0 a3=7ffd56e6f8dc items=0 ppid=2345 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 19:45:35.277000 audit[2495]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2495 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.277000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe556f1fe0 a2=0 a3=7ffe556f1fcc items=0 ppid=2345 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.277000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 19:45:35.279000 audit[2498]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.279000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf66dd2c0 a2=0 a3=7ffdf66dd2ac items=0 ppid=2345 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 19:45:35.280000 audit[2499]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.280000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed1e0bbe0 a2=0 a3=7ffed1e0bbcc items=0 ppid=2345 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 19:45:35.282000 audit[2501]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 19:45:35.282000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff68241500 a2=0 a3=7fff682414ec items=0 ppid=2345 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:45:35.284000 audit[2504]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 19:45:35.284000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff4293a880 a2=0 a3=7fff4293a86c items=0 ppid=2345 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 19:45:35.288000 audit[2508]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:45:35.288000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffc2c9098e0 a2=0 a3=7ffc2c9098cc items=0 ppid=2345 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.288000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:35.288000 audit[2508]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=2508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 19:45:35.288000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=1968 a0=3 a1=7ffc2c9098e0 a2=0 a3=7ffc2c9098cc items=0 ppid=2345 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:35.288000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:35.937881 kubelet[2166]: E0209 19:45:35.937255 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:36.055314 kubelet[2166]: E0209 19:45:36.055291 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:36.154591 kubelet[2166]: E0209 19:45:36.154565 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:36.518232 kubelet[2166]: I0209 19:45:36.518147 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5mvrc" podStartSLOduration=2.518100091 pod.CreationTimestamp="2024-02-09 19:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:36.237247174 +0000 UTC m=+15.330405500" watchObservedRunningTime="2024-02-09 19:45:36.518100091 +0000 UTC m=+15.611258407" Feb 9 19:45:37.057449 kubelet[2166]: E0209 19:45:37.057364 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:38.482465 env[1210]: time="2024-02-09T19:45:38.482414888Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.485109 env[1210]: time="2024-02-09T19:45:38.485055535Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.486889 env[1210]: time="2024-02-09T19:45:38.486855235Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.488685 env[1210]: time="2024-02-09T19:45:38.488650475Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:38.489310 env[1210]: time="2024-02-09T19:45:38.489278762Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:7bc79e0d3be4fa8c35133127424f9b1ec775af43145b7dd58637905c76084827\"" Feb 9 19:45:38.490806 env[1210]: time="2024-02-09T19:45:38.490771060Z" level=info msg="CreateContainer within sandbox \"3c6115147edb726ca34165d3b020492eb36511550aacf7c7956390d8c311d903\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 19:45:38.506364 env[1210]: time="2024-02-09T19:45:38.506283888Z" level=info msg="CreateContainer within sandbox \"3c6115147edb726ca34165d3b020492eb36511550aacf7c7956390d8c311d903\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8ed2162d964654498411050b6dce804aa24af498471008c29675fe3d6da1506d\"" Feb 9 19:45:38.507105 env[1210]: time="2024-02-09T19:45:38.506634260Z" level=info msg="StartContainer for \"8ed2162d964654498411050b6dce804aa24af498471008c29675fe3d6da1506d\"" Feb 9 19:45:38.671861 env[1210]: time="2024-02-09T19:45:38.671806950Z" level=info msg="StartContainer for \"8ed2162d964654498411050b6dce804aa24af498471008c29675fe3d6da1506d\" returns successfully" Feb 9 19:45:40.330000 audit[2572]: NETFILTER_CFG table=filter:103 family=2 entries=13 
op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.334636 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 19:45:40.334700 kernel: audit: type=1325 audit(1707507940.330:274): table=filter:103 family=2 entries=13 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.334728 kernel: audit: type=1300 audit(1707507940.330:274): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffff9730c00 a2=0 a3=7ffff9730bec items=0 ppid=2345 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.330000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffff9730c00 a2=0 a3=7ffff9730bec items=0 ppid=2345 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.339480 kernel: audit: type=1327 audit(1707507940.330:274): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.331000 audit[2572]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.331000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffff9730c00 a2=0 a3=7ffff9730bec items=0 ppid=2345 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:45:40.346465 kernel: audit: type=1325 audit(1707507940.331:275): table=nat:104 family=2 entries=20 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.346521 kernel: audit: type=1300 audit(1707507940.331:275): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffff9730c00 a2=0 a3=7ffff9730bec items=0 ppid=2345 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.346548 kernel: audit: type=1327 audit(1707507940.331:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.366000 audit[2598]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.366000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc8ea09060 a2=0 a3=7ffc8ea0904c items=0 ppid=2345 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.372653 kernel: audit: type=1325 audit(1707507940.366:276): table=filter:105 family=2 entries=14 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.372701 kernel: audit: type=1300 audit(1707507940.366:276): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffc8ea09060 a2=0 a3=7ffc8ea0904c items=0 ppid=2345 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.372727 kernel: audit: type=1327 audit(1707507940.366:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.366000 audit[2598]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.366000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffc8ea09060 a2=0 a3=7ffc8ea0904c items=0 ppid=2345 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:40.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:40.379416 kernel: audit: type=1325 audit(1707507940.366:277): table=nat:106 family=2 entries=20 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:40.439929 kubelet[2166]: I0209 19:45:40.439879 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-2kfc4" podStartSLOduration=-9.223372030414942e+09 pod.CreationTimestamp="2024-02-09 19:45:34 +0000 UTC" firstStartedPulling="2024-02-09 19:45:35.208531396 +0000 UTC m=+14.301689702" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:39.068034784 +0000 UTC m=+18.161193090" watchObservedRunningTime="2024-02-09 19:45:40.43983336 +0000 UTC m=+19.532991666" Feb 9 19:45:40.440419 kubelet[2166]: I0209 19:45:40.439995 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 
19:45:40.471079 kubelet[2166]: I0209 19:45:40.471041 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:40.585672 kubelet[2166]: I0209 19:45:40.585562 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:40.585837 kubelet[2166]: E0209 19:45:40.585808 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:40.600416 kubelet[2166]: I0209 19:45:40.600341 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-log-dir\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600416 kubelet[2166]: I0209 19:45:40.600412 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bc67f2a-5298-4305-9a52-654acbff2b06-node-certs\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600627 kubelet[2166]: I0209 19:45:40.600450 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p8x9\" (UniqueName: \"kubernetes.io/projected/0adbd313-a370-496b-9907-dd54c7fab656-kube-api-access-9p8x9\") pod \"calico-typha-5b4ddfb9d9-qdfjk\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") " pod="calico-system/calico-typha-5b4ddfb9d9-qdfjk" Feb 9 19:45:40.600627 kubelet[2166]: I0209 19:45:40.600478 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7bc67f2a-5298-4305-9a52-654acbff2b06-tigera-ca-bundle\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600627 kubelet[2166]: I0209 19:45:40.600503 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-net-dir\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600627 kubelet[2166]: I0209 19:45:40.600530 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0adbd313-a370-496b-9907-dd54c7fab656-typha-certs\") pod \"calico-typha-5b4ddfb9d9-qdfjk\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") " pod="calico-system/calico-typha-5b4ddfb9d9-qdfjk" Feb 9 19:45:40.600627 kubelet[2166]: I0209 19:45:40.600554 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-policysync\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600844 kubelet[2166]: I0209 19:45:40.600579 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-lib-calico\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600844 kubelet[2166]: I0209 19:45:40.600604 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-flexvol-driver-host\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600844 kubelet[2166]: I0209 19:45:40.600628 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-bin-dir\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600844 kubelet[2166]: I0209 19:45:40.600655 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-xtables-lock\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.600844 kubelet[2166]: I0209 19:45:40.600680 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-run-calico\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.601013 kubelet[2166]: I0209 19:45:40.600705 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0adbd313-a370-496b-9907-dd54c7fab656-tigera-ca-bundle\") pod \"calico-typha-5b4ddfb9d9-qdfjk\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") " pod="calico-system/calico-typha-5b4ddfb9d9-qdfjk" Feb 9 19:45:40.601013 kubelet[2166]: I0209 19:45:40.600728 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-lib-modules\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.601013 kubelet[2166]: I0209 19:45:40.600752 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fv4b\" (UniqueName: \"kubernetes.io/projected/7bc67f2a-5298-4305-9a52-654acbff2b06-kube-api-access-8fv4b\") pod \"calico-node-cf82t\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " pod="calico-system/calico-node-cf82t" Feb 9 19:45:40.701508 kubelet[2166]: I0209 19:45:40.701472 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/991e9420-1ee3-42d4-b3be-2ddc6b5f52db-kubelet-dir\") pod \"csi-node-driver-ngqgb\" (UID: \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\") " pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:40.701816 kubelet[2166]: I0209 19:45:40.701787 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/991e9420-1ee3-42d4-b3be-2ddc6b5f52db-registration-dir\") pod \"csi-node-driver-ngqgb\" (UID: \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\") " pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:40.702084 kubelet[2166]: E0209 19:45:40.702056 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.702084 kubelet[2166]: W0209 19:45:40.702073 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.702084 kubelet[2166]: E0209 19:45:40.702102 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.702360 kubelet[2166]: E0209 19:45:40.702343 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.702360 kubelet[2166]: W0209 19:45:40.702356 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.702471 kubelet[2166]: E0209 19:45:40.702375 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.702570 kubelet[2166]: E0209 19:45:40.702544 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.702605 kubelet[2166]: W0209 19:45:40.702570 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.702605 kubelet[2166]: E0209 19:45:40.702589 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.702844 kubelet[2166]: E0209 19:45:40.702826 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.702844 kubelet[2166]: W0209 19:45:40.702837 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.702942 kubelet[2166]: E0209 19:45:40.702854 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.703031 kubelet[2166]: E0209 19:45:40.703016 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.703031 kubelet[2166]: W0209 19:45:40.703026 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.703096 kubelet[2166]: E0209 19:45:40.703062 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.703281 kubelet[2166]: E0209 19:45:40.703265 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.703281 kubelet[2166]: W0209 19:45:40.703276 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.703397 kubelet[2166]: E0209 19:45:40.703306 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.703469 kubelet[2166]: E0209 19:45:40.703451 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.703469 kubelet[2166]: W0209 19:45:40.703463 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.703572 kubelet[2166]: E0209 19:45:40.703487 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.703640 kubelet[2166]: E0209 19:45:40.703623 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.703640 kubelet[2166]: W0209 19:45:40.703634 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.703747 kubelet[2166]: E0209 19:45:40.703657 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.703808 kubelet[2166]: E0209 19:45:40.703790 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.703808 kubelet[2166]: W0209 19:45:40.703802 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.703912 kubelet[2166]: E0209 19:45:40.703820 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.704034 kubelet[2166]: E0209 19:45:40.704010 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.704034 kubelet[2166]: W0209 19:45:40.704021 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.704132 kubelet[2166]: E0209 19:45:40.704038 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.704244 kubelet[2166]: E0209 19:45:40.704233 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.704244 kubelet[2166]: W0209 19:45:40.704241 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.704307 kubelet[2166]: E0209 19:45:40.704258 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.704501 kubelet[2166]: E0209 19:45:40.704483 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.704501 kubelet[2166]: W0209 19:45:40.704498 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.704604 kubelet[2166]: E0209 19:45:40.704517 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.704858 kubelet[2166]: E0209 19:45:40.704833 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.704858 kubelet[2166]: W0209 19:45:40.704847 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.704938 kubelet[2166]: E0209 19:45:40.704866 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.705034 kubelet[2166]: E0209 19:45:40.705019 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.705034 kubelet[2166]: W0209 19:45:40.705029 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.705102 kubelet[2166]: E0209 19:45:40.705054 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.705254 kubelet[2166]: E0209 19:45:40.705232 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.705254 kubelet[2166]: W0209 19:45:40.705251 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.705318 kubelet[2166]: E0209 19:45:40.705269 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.705444 kubelet[2166]: E0209 19:45:40.705428 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.705444 kubelet[2166]: W0209 19:45:40.705439 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.705533 kubelet[2166]: E0209 19:45:40.705456 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.705666 kubelet[2166]: E0209 19:45:40.705649 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.705666 kubelet[2166]: W0209 19:45:40.705660 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.705762 kubelet[2166]: E0209 19:45:40.705678 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.705889 kubelet[2166]: E0209 19:45:40.705862 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.705889 kubelet[2166]: W0209 19:45:40.705873 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.705889 kubelet[2166]: E0209 19:45:40.705889 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.706078 kubelet[2166]: E0209 19:45:40.706062 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.706078 kubelet[2166]: W0209 19:45:40.706073 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.706175 kubelet[2166]: E0209 19:45:40.706097 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.706175 kubelet[2166]: I0209 19:45:40.706129 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5c6\" (UniqueName: \"kubernetes.io/projected/991e9420-1ee3-42d4-b3be-2ddc6b5f52db-kube-api-access-4f5c6\") pod \"csi-node-driver-ngqgb\" (UID: \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\") " pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:40.706285 kubelet[2166]: E0209 19:45:40.706261 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.706285 kubelet[2166]: W0209 19:45:40.706277 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.706385 kubelet[2166]: E0209 19:45:40.706347 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.706475 kubelet[2166]: E0209 19:45:40.706457 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.706475 kubelet[2166]: W0209 19:45:40.706470 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.706572 kubelet[2166]: E0209 19:45:40.706485 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.706652 kubelet[2166]: E0209 19:45:40.706627 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.706652 kubelet[2166]: W0209 19:45:40.706643 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.706652 kubelet[2166]: E0209 19:45:40.706655 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.706834 kubelet[2166]: E0209 19:45:40.706817 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.706834 kubelet[2166]: W0209 19:45:40.706828 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.706919 kubelet[2166]: E0209 19:45:40.706844 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.707049 kubelet[2166]: E0209 19:45:40.707029 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.707049 kubelet[2166]: W0209 19:45:40.707043 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.707143 kubelet[2166]: E0209 19:45:40.707060 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.707280 kubelet[2166]: E0209 19:45:40.707261 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.707280 kubelet[2166]: W0209 19:45:40.707274 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.707363 kubelet[2166]: E0209 19:45:40.707352 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.707515 kubelet[2166]: E0209 19:45:40.707478 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.707515 kubelet[2166]: W0209 19:45:40.707496 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.707609 kubelet[2166]: E0209 19:45:40.707577 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.707703 kubelet[2166]: E0209 19:45:40.707668 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.707703 kubelet[2166]: W0209 19:45:40.707681 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.707803 kubelet[2166]: E0209 19:45:40.707739 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.707860 kubelet[2166]: E0209 19:45:40.707844 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.707860 kubelet[2166]: W0209 19:45:40.707852 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.707969 kubelet[2166]: E0209 19:45:40.707937 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.708029 kubelet[2166]: E0209 19:45:40.708011 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708029 kubelet[2166]: W0209 19:45:40.708021 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708029 kubelet[2166]: E0209 19:45:40.708032 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.708176 kubelet[2166]: E0209 19:45:40.708155 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708176 kubelet[2166]: W0209 19:45:40.708165 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708244 kubelet[2166]: E0209 19:45:40.708184 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.708369 kubelet[2166]: E0209 19:45:40.708353 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708369 kubelet[2166]: W0209 19:45:40.708365 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708533 kubelet[2166]: E0209 19:45:40.708383 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.708533 kubelet[2166]: E0209 19:45:40.708517 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708533 kubelet[2166]: W0209 19:45:40.708524 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708533 kubelet[2166]: E0209 19:45:40.708538 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.708661 kubelet[2166]: E0209 19:45:40.708657 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708698 kubelet[2166]: W0209 19:45:40.708664 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708698 kubelet[2166]: E0209 19:45:40.708678 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.708833 kubelet[2166]: E0209 19:45:40.708817 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708833 kubelet[2166]: W0209 19:45:40.708827 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.708915 kubelet[2166]: E0209 19:45:40.708843 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.708972 kubelet[2166]: E0209 19:45:40.708958 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.708972 kubelet[2166]: W0209 19:45:40.708968 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709029 kubelet[2166]: E0209 19:45:40.708982 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.709132 kubelet[2166]: E0209 19:45:40.709113 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.709132 kubelet[2166]: W0209 19:45:40.709129 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709199 kubelet[2166]: E0209 19:45:40.709152 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.709252 kubelet[2166]: E0209 19:45:40.709239 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.709252 kubelet[2166]: W0209 19:45:40.709250 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709326 kubelet[2166]: E0209 19:45:40.709264 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.709447 kubelet[2166]: E0209 19:45:40.709430 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.709447 kubelet[2166]: W0209 19:45:40.709441 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709543 kubelet[2166]: E0209 19:45:40.709457 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.709606 kubelet[2166]: E0209 19:45:40.709590 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.709606 kubelet[2166]: W0209 19:45:40.709601 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709694 kubelet[2166]: E0209 19:45:40.709614 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.709797 kubelet[2166]: E0209 19:45:40.709775 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.709797 kubelet[2166]: W0209 19:45:40.709788 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.709797 kubelet[2166]: E0209 19:45:40.709797 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.710084 kubelet[2166]: E0209 19:45:40.710065 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710084 kubelet[2166]: W0209 19:45:40.710080 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710180 kubelet[2166]: E0209 19:45:40.710100 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.710372 kubelet[2166]: E0209 19:45:40.710355 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710372 kubelet[2166]: W0209 19:45:40.710367 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710550 kubelet[2166]: E0209 19:45:40.710383 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.710550 kubelet[2166]: E0209 19:45:40.710546 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710550 kubelet[2166]: W0209 19:45:40.710552 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710650 kubelet[2166]: E0209 19:45:40.710562 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.710650 kubelet[2166]: I0209 19:45:40.710579 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/991e9420-1ee3-42d4-b3be-2ddc6b5f52db-varrun\") pod \"csi-node-driver-ngqgb\" (UID: \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\") " pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:40.710714 kubelet[2166]: E0209 19:45:40.710674 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710714 kubelet[2166]: W0209 19:45:40.710682 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710714 kubelet[2166]: E0209 19:45:40.710691 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.710810 kubelet[2166]: E0209 19:45:40.710788 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710810 kubelet[2166]: W0209 19:45:40.710794 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710810 kubelet[2166]: E0209 19:45:40.710802 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.710914 kubelet[2166]: E0209 19:45:40.710898 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.710914 kubelet[2166]: W0209 19:45:40.710904 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.710914 kubelet[2166]: E0209 19:45:40.710913 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.711005 kubelet[2166]: I0209 19:45:40.710929 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/991e9420-1ee3-42d4-b3be-2ddc6b5f52db-socket-dir\") pod \"csi-node-driver-ngqgb\" (UID: \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\") " pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:40.711070 kubelet[2166]: E0209 19:45:40.711039 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711070 kubelet[2166]: W0209 19:45:40.711056 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711070 kubelet[2166]: E0209 19:45:40.711067 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.711193 kubelet[2166]: E0209 19:45:40.711162 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711193 kubelet[2166]: W0209 19:45:40.711167 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711193 kubelet[2166]: E0209 19:45:40.711175 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.711285 kubelet[2166]: E0209 19:45:40.711272 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711285 kubelet[2166]: W0209 19:45:40.711277 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711285 kubelet[2166]: E0209 19:45:40.711285 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.711408 kubelet[2166]: E0209 19:45:40.711372 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711408 kubelet[2166]: W0209 19:45:40.711384 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711408 kubelet[2166]: E0209 19:45:40.711401 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.711528 kubelet[2166]: E0209 19:45:40.711490 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711528 kubelet[2166]: W0209 19:45:40.711495 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711528 kubelet[2166]: E0209 19:45:40.711503 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.711620 kubelet[2166]: E0209 19:45:40.711599 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.711620 kubelet[2166]: W0209 19:45:40.711605 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.711620 kubelet[2166]: E0209 19:45:40.711613 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.711966 kubelet[2166]: E0209 19:45:40.711953 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.712049 kubelet[2166]: W0209 19:45:40.712033 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.712199 kubelet[2166]: E0209 19:45:40.712187 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.712378 kubelet[2166]: E0209 19:45:40.712366 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.712480 kubelet[2166]: W0209 19:45:40.712464 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.712621 kubelet[2166]: E0209 19:45:40.712609 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.712803 kubelet[2166]: E0209 19:45:40.712792 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.712885 kubelet[2166]: W0209 19:45:40.712869 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.713028 kubelet[2166]: E0209 19:45:40.713016 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.713497 kubelet[2166]: E0209 19:45:40.713486 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.713586 kubelet[2166]: W0209 19:45:40.713570 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.713732 kubelet[2166]: E0209 19:45:40.713721 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.713900 kubelet[2166]: E0209 19:45:40.713890 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.713980 kubelet[2166]: W0209 19:45:40.713965 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.714115 kubelet[2166]: E0209 19:45:40.714104 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.714288 kubelet[2166]: E0209 19:45:40.714278 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.714373 kubelet[2166]: W0209 19:45:40.714357 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.714528 kubelet[2166]: E0209 19:45:40.714518 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.714699 kubelet[2166]: E0209 19:45:40.714689 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.714779 kubelet[2166]: W0209 19:45:40.714763 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.714919 kubelet[2166]: E0209 19:45:40.714908 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:40.715092 kubelet[2166]: E0209 19:45:40.715081 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.715179 kubelet[2166]: W0209 19:45:40.715164 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.715331 kubelet[2166]: E0209 19:45:40.715319 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:40.715535 kubelet[2166]: E0209 19:45:40.715523 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:40.715617 kubelet[2166]: W0209 19:45:40.715601 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:40.715759 kubelet[2166]: E0209 19:45:40.715748 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:41.054171 kubelet[2166]: E0209 19:45:41.051460 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:41.054171 kubelet[2166]: W0209 19:45:41.051474 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:41.054171 kubelet[2166]: E0209 19:45:41.051491 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:41.073557 kubelet[2166]: E0209 19:45:41.073535 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:41.073943 env[1210]: time="2024-02-09T19:45:41.073898666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cf82t,Uid:7bc67f2a-5298-4305-9a52-654acbff2b06,Namespace:calico-system,Attempt:0,}" Feb 9 19:45:41.118668 kubelet[2166]: E0209 19:45:41.118570 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:41.118668 kubelet[2166]: W0209 19:45:41.118588 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:41.118668 kubelet[2166]: E0209 19:45:41.118605 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 19:45:41.219794 kubelet[2166]: E0209 19:45:41.219765 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:41.219794 kubelet[2166]: W0209 19:45:41.219786 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:41.219794 kubelet[2166]: E0209 19:45:41.219806 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:41.252877 kubelet[2166]: E0209 19:45:41.252857 2166 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 19:45:41.252877 kubelet[2166]: W0209 19:45:41.252872 2166 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 19:45:41.253018 kubelet[2166]: E0209 19:45:41.252887 2166 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 19:45:41.324826 env[1210]: time="2024-02-09T19:45:41.324742366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:41.324998 env[1210]: time="2024-02-09T19:45:41.324802600Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:41.324998 env[1210]: time="2024-02-09T19:45:41.324813671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:41.324998 env[1210]: time="2024-02-09T19:45:41.324974484Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6 pid=2735 runtime=io.containerd.runc.v2 Feb 9 19:45:41.344314 kubelet[2166]: E0209 19:45:41.344279 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:41.344797 env[1210]: time="2024-02-09T19:45:41.344759272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4ddfb9d9-qdfjk,Uid:0adbd313-a370-496b-9907-dd54c7fab656,Namespace:calico-system,Attempt:0,}" Feb 9 19:45:41.403668 env[1210]: time="2024-02-09T19:45:41.403553071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cf82t,Uid:7bc67f2a-5298-4305-9a52-654acbff2b06,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\"" Feb 9 19:45:41.404442 kubelet[2166]: E0209 19:45:41.404417 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:41.406043 env[1210]: time="2024-02-09T19:45:41.405370520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 19:45:41.410000 audit[2793]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=2793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:41.410000 audit[2793]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffd52ca0260 a2=0 a3=7ffd52ca024c items=0 ppid=2345 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:41.410000 audit[2793]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=2793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:45:41.410000 audit[2793]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffd52ca0260 a2=0 a3=7ffd52ca024c items=0 ppid=2345 pid=2793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:45:41.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:45:41.564239 env[1210]: time="2024-02-09T19:45:41.564163260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:45:41.564455 env[1210]: time="2024-02-09T19:45:41.564221449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:45:41.564455 env[1210]: time="2024-02-09T19:45:41.564237520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:45:41.564583 env[1210]: time="2024-02-09T19:45:41.564459468Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3 pid=2801 runtime=io.containerd.runc.v2 Feb 9 19:45:41.621137 env[1210]: time="2024-02-09T19:45:41.621069588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b4ddfb9d9-qdfjk,Uid:0adbd313-a370-496b-9907-dd54c7fab656,Namespace:calico-system,Attempt:0,} returns sandbox id \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\"" Feb 9 19:45:41.621959 kubelet[2166]: E0209 19:45:41.621931 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:42.028720 kubelet[2166]: E0209 19:45:42.028694 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:44.028352 kubelet[2166]: E0209 19:45:44.028316 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:45.387121 env[1210]: time="2024-02-09T19:45:45.387076070Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:45.389120 env[1210]: time="2024-02-09T19:45:45.389091206Z" 
level=info msg="ImageCreate event &ImageCreate{Name:sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:45.391103 env[1210]: time="2024-02-09T19:45:45.391073539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:45.393506 env[1210]: time="2024-02-09T19:45:45.393459133Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:45:45.394463 env[1210]: time="2024-02-09T19:45:45.394424742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:6506d2e0be2d5ec9cb8dbe00c4b4f037c67b6ab4ec14a1f0c83333ac51f4da9a\"" Feb 9 19:45:45.395723 env[1210]: time="2024-02-09T19:45:45.395688412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 19:45:45.396910 env[1210]: time="2024-02-09T19:45:45.396870450Z" level=info msg="CreateContainer within sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 19:45:45.410317 env[1210]: time="2024-02-09T19:45:45.410274869Z" level=info msg="CreateContainer within sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\"" Feb 9 19:45:45.410707 env[1210]: time="2024-02-09T19:45:45.410671206Z" level=info msg="StartContainer for \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\"" Feb 9 19:45:45.455609 env[1210]: 
time="2024-02-09T19:45:45.455545825Z" level=info msg="StartContainer for \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\" returns successfully" Feb 9 19:45:45.524996 env[1210]: time="2024-02-09T19:45:45.524944048Z" level=info msg="shim disconnected" id=1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107 Feb 9 19:45:45.524996 env[1210]: time="2024-02-09T19:45:45.524989515Z" level=warning msg="cleaning up after shim disconnected" id=1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107 namespace=k8s.io Feb 9 19:45:45.524996 env[1210]: time="2024-02-09T19:45:45.524998401Z" level=info msg="cleaning up dead shim" Feb 9 19:45:45.532273 env[1210]: time="2024-02-09T19:45:45.532245192Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2884 runtime=io.containerd.runc.v2\n" Feb 9 19:45:46.028727 kubelet[2166]: E0209 19:45:46.028649 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:46.075410 env[1210]: time="2024-02-09T19:45:46.075358182Z" level=info msg="StopPodSandbox for \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\"" Feb 9 19:45:46.075565 env[1210]: time="2024-02-09T19:45:46.075436749Z" level=info msg="Container to stop \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:45:46.099340 env[1210]: time="2024-02-09T19:45:46.099269901Z" level=info msg="shim disconnected" id=3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6 Feb 9 19:45:46.099340 env[1210]: time="2024-02-09T19:45:46.099316378Z" level=warning msg="cleaning up after shim disconnected" 
id=3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6 namespace=k8s.io Feb 9 19:45:46.099340 env[1210]: time="2024-02-09T19:45:46.099324273Z" level=info msg="cleaning up dead shim" Feb 9 19:45:46.107291 env[1210]: time="2024-02-09T19:45:46.107216334Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" Feb 9 19:45:46.107557 env[1210]: time="2024-02-09T19:45:46.107528954Z" level=info msg="TearDown network for sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" successfully" Feb 9 19:45:46.107557 env[1210]: time="2024-02-09T19:45:46.107551987Z" level=info msg="StopPodSandbox for \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" returns successfully" Feb 9 19:45:46.155010 kubelet[2166]: I0209 19:45:46.154957 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-run-calico\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155010 kubelet[2166]: I0209 19:45:46.155008 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bc67f2a-5298-4305-9a52-654acbff2b06-tigera-ca-bundle\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155010 kubelet[2166]: I0209 19:45:46.155025 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-lib-calico\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155269 kubelet[2166]: I0209 19:45:46.155043 2166 reconciler_common.go:169] 
"operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-flexvol-driver-host\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155269 kubelet[2166]: I0209 19:45:46.155074 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-bin-dir\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155269 kubelet[2166]: I0209 19:45:46.155074 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.155269 kubelet[2166]: I0209 19:45:46.155102 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-lib-modules\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.155269 kubelet[2166]: I0209 19:45:46.155126 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-net-dir\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156115 kubelet[2166]: I0209 19:45:46.155126 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-flexvol-driver-host" (OuterVolumeSpecName: 
"flexvol-driver-host") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156115 kubelet[2166]: I0209 19:45:46.155141 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-xtables-lock\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156115 kubelet[2166]: I0209 19:45:46.155143 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156115 kubelet[2166]: I0209 19:45:46.155170 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fv4b\" (UniqueName: \"kubernetes.io/projected/7bc67f2a-5298-4305-9a52-654acbff2b06-kube-api-access-8fv4b\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156115 kubelet[2166]: I0209 19:45:46.155179 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156263 kubelet[2166]: I0209 19:45:46.155194 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156263 kubelet[2166]: I0209 19:45:46.155207 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bc67f2a-5298-4305-9a52-654acbff2b06-node-certs\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156263 kubelet[2166]: I0209 19:45:46.155209 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156263 kubelet[2166]: I0209 19:45:46.155227 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-log-dir\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156263 kubelet[2166]: W0209 19:45:46.155204 2166 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/7bc67f2a-5298-4305-9a52-654acbff2b06/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 19:45:46.156263 kubelet[2166]: I0209 19:45:46.155249 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-policysync\") pod \"7bc67f2a-5298-4305-9a52-654acbff2b06\" (UID: \"7bc67f2a-5298-4305-9a52-654acbff2b06\") " Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155333 2166 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155349 2166 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155423 2166 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-var-run-calico\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155420 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7bc67f2a-5298-4305-9a52-654acbff2b06-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155443 2166 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155452 2166 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-lib-modules\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156484 kubelet[2166]: I0209 19:45:46.155461 2166 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-xtables-lock\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.156715 kubelet[2166]: I0209 19:45:46.155467 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156715 kubelet[2166]: I0209 19:45:46.155527 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-policysync" (OuterVolumeSpecName: "policysync") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.156715 kubelet[2166]: I0209 19:45:46.155574 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:45:46.157686 kubelet[2166]: I0209 19:45:46.157637 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc67f2a-5298-4305-9a52-654acbff2b06-node-certs" (OuterVolumeSpecName: "node-certs") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:45:46.158295 kubelet[2166]: I0209 19:45:46.158222 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc67f2a-5298-4305-9a52-654acbff2b06-kube-api-access-8fv4b" (OuterVolumeSpecName: "kube-api-access-8fv4b") pod "7bc67f2a-5298-4305-9a52-654acbff2b06" (UID: "7bc67f2a-5298-4305-9a52-654acbff2b06"). InnerVolumeSpecName "kube-api-access-8fv4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:45:46.255906 kubelet[2166]: I0209 19:45:46.255856 2166 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bc67f2a-5298-4305-9a52-654acbff2b06-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.255906 kubelet[2166]: I0209 19:45:46.255897 2166 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.255906 kubelet[2166]: I0209 19:45:46.255916 2166 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8fv4b\" (UniqueName: \"kubernetes.io/projected/7bc67f2a-5298-4305-9a52-654acbff2b06-kube-api-access-8fv4b\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.256142 kubelet[2166]: I0209 19:45:46.255935 2166 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7bc67f2a-5298-4305-9a52-654acbff2b06-node-certs\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.256142 kubelet[2166]: I0209 19:45:46.255949 2166 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.256142 kubelet[2166]: I0209 19:45:46.255962 2166 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7bc67f2a-5298-4305-9a52-654acbff2b06-policysync\") on node \"localhost\" DevicePath \"\"" Feb 9 19:45:46.405862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107-rootfs.mount: Deactivated successfully. 
Feb 9 19:45:46.406003 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6-rootfs.mount: Deactivated successfully. Feb 9 19:45:46.406090 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6-shm.mount: Deactivated successfully. Feb 9 19:45:46.406219 systemd[1]: var-lib-kubelet-pods-7bc67f2a\x2d5298\x2d4305\x2d9a52\x2d654acbff2b06-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8fv4b.mount: Deactivated successfully. Feb 9 19:45:46.406301 systemd[1]: var-lib-kubelet-pods-7bc67f2a\x2d5298\x2d4305\x2d9a52\x2d654acbff2b06-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 9 19:45:47.077783 kubelet[2166]: I0209 19:45:47.077733 2166 scope.go:115] "RemoveContainer" containerID="1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107" Feb 9 19:45:47.078719 env[1210]: time="2024-02-09T19:45:47.078678841Z" level=info msg="RemoveContainer for \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\"" Feb 9 19:45:47.196019 kubelet[2166]: I0209 19:45:47.195152 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:47.196019 kubelet[2166]: E0209 19:45:47.195206 2166 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7bc67f2a-5298-4305-9a52-654acbff2b06" containerName="flexvol-driver" Feb 9 19:45:47.196019 kubelet[2166]: I0209 19:45:47.195227 2166 memory_manager.go:346] "RemoveStaleState removing state" podUID="7bc67f2a-5298-4305-9a52-654acbff2b06" containerName="flexvol-driver" Feb 9 19:45:47.209451 env[1210]: time="2024-02-09T19:45:47.209405701Z" level=info msg="RemoveContainer for \"1afd708a2a8099b9d7c84c9fecd1448dde506f8cd5cd83aa9023c470c12e1107\" returns successfully" Feb 9 19:45:47.219184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968240243.mount: Deactivated successfully. 
Feb 9 19:45:47.261546 kubelet[2166]: I0209 19:45:47.261511 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-var-lib-calico\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261546 kubelet[2166]: I0209 19:45:47.261556 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-cni-bin-dir\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261715 kubelet[2166]: I0209 19:45:47.261589 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-flexvol-driver-host\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261715 kubelet[2166]: I0209 19:45:47.261668 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdk7d\" (UniqueName: \"kubernetes.io/projected/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-kube-api-access-mdk7d\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261767 kubelet[2166]: I0209 19:45:47.261735 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-node-certs\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261807 kubelet[2166]: I0209 19:45:47.261789 
2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-var-run-calico\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261931 kubelet[2166]: I0209 19:45:47.261905 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-lib-modules\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.261976 kubelet[2166]: I0209 19:45:47.261964 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-xtables-lock\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.262051 kubelet[2166]: I0209 19:45:47.262018 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-policysync\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.262200 kubelet[2166]: I0209 19:45:47.262068 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-tigera-ca-bundle\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5" Feb 9 19:45:47.262200 kubelet[2166]: I0209 19:45:47.262097 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-cni-log-dir\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5"
Feb 9 19:45:47.262200 kubelet[2166]: I0209 19:45:47.262125 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9f7ccab1-78aa-4b44-9a6c-9c4abea2046b-cni-net-dir\") pod \"calico-node-m6hf5\" (UID: \"9f7ccab1-78aa-4b44-9a6c-9c4abea2046b\") " pod="calico-system/calico-node-m6hf5"
Feb 9 19:45:47.499161 kubelet[2166]: E0209 19:45:47.499124 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:47.499535 env[1210]: time="2024-02-09T19:45:47.499501854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m6hf5,Uid:9f7ccab1-78aa-4b44-9a6c-9c4abea2046b,Namespace:calico-system,Attempt:0,}"
Feb 9 19:45:47.511087 env[1210]: time="2024-02-09T19:45:47.511016401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:45:47.511087 env[1210]: time="2024-02-09T19:45:47.511059722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:45:47.511087 env[1210]: time="2024-02-09T19:45:47.511069772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:45:47.511269 env[1210]: time="2024-02-09T19:45:47.511212681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96 pid=2950 runtime=io.containerd.runc.v2
Feb 9 19:45:47.540119 env[1210]: time="2024-02-09T19:45:47.540070589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m6hf5,Uid:9f7ccab1-78aa-4b44-9a6c-9c4abea2046b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\""
Feb 9 19:45:47.540720 kubelet[2166]: E0209 19:45:47.540694 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:47.542297 env[1210]: time="2024-02-09T19:45:47.542253660Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 9 19:45:47.566696 env[1210]: time="2024-02-09T19:45:47.566650018Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c79772f3cb4d62defccc5642cf83152ec7087e5beab27d863e0fb3bdcdf6e62a\""
Feb 9 19:45:47.567178 env[1210]: time="2024-02-09T19:45:47.567148267Z" level=info msg="StartContainer for \"c79772f3cb4d62defccc5642cf83152ec7087e5beab27d863e0fb3bdcdf6e62a\""
Feb 9 19:45:47.611906 env[1210]: time="2024-02-09T19:45:47.611823653Z" level=info msg="StartContainer for \"c79772f3cb4d62defccc5642cf83152ec7087e5beab27d863e0fb3bdcdf6e62a\" returns successfully"
Feb 9 19:45:47.663893 env[1210]: time="2024-02-09T19:45:47.663835753Z" level=info msg="shim disconnected" id=c79772f3cb4d62defccc5642cf83152ec7087e5beab27d863e0fb3bdcdf6e62a
Feb 9 19:45:47.663893 env[1210]: time="2024-02-09T19:45:47.663878293Z" level=warning msg="cleaning up after shim disconnected" id=c79772f3cb4d62defccc5642cf83152ec7087e5beab27d863e0fb3bdcdf6e62a namespace=k8s.io
Feb 9 19:45:47.663893 env[1210]: time="2024-02-09T19:45:47.663887230Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:47.670694 env[1210]: time="2024-02-09T19:45:47.670651304Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3028 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:48.028969 kubelet[2166]: E0209 19:45:48.028929 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db
Feb 9 19:45:48.085355 kubelet[2166]: E0209 19:45:48.085334 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:49.032046 kubelet[2166]: I0209 19:45:49.032019 2166 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=7bc67f2a-5298-4305-9a52-654acbff2b06 path="/var/lib/kubelet/pods/7bc67f2a-5298-4305-9a52-654acbff2b06/volumes"
Feb 9 19:45:49.780466 env[1210]: time="2024-02-09T19:45:49.780405861Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:49.782071 env[1210]: time="2024-02-09T19:45:49.782043723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:49.783684 env[1210]: time="2024-02-09T19:45:49.783638504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:49.785321 env[1210]: time="2024-02-09T19:45:49.785299269Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:49.785926 env[1210]: time="2024-02-09T19:45:49.785892505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:b33768e0da1f8a5788a6a5d8ac2dcf15292ea9f3717de450f946c0a055b3532c\""
Feb 9 19:45:49.787231 env[1210]: time="2024-02-09T19:45:49.787077936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\""
Feb 9 19:45:49.794556 env[1210]: time="2024-02-09T19:45:49.794511864Z" level=info msg="CreateContainer within sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 9 19:45:49.804497 env[1210]: time="2024-02-09T19:45:49.804455806Z" level=info msg="CreateContainer within sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\""
Feb 9 19:45:49.805969 env[1210]: time="2024-02-09T19:45:49.804802328Z" level=info msg="StartContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\""
Feb 9 19:45:50.028247 kubelet[2166]: E0209 19:45:50.027919 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db
Feb 9 19:45:50.129525 env[1210]: time="2024-02-09T19:45:50.129488908Z" level=info msg="StartContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" returns successfully"
Feb 9 19:45:50.792623 systemd[1]: run-containerd-runc-k8s.io-8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04-runc.dIh3Wj.mount: Deactivated successfully.
Feb 9 19:45:51.133084 env[1210]: time="2024-02-09T19:45:51.133057013Z" level=info msg="StopContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" with timeout 300 (s)"
Feb 9 19:45:51.133442 env[1210]: time="2024-02-09T19:45:51.133405949Z" level=info msg="Stop container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" with signal terminated"
Feb 9 19:45:51.156547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04-rootfs.mount: Deactivated successfully.
Feb 9 19:45:51.159787 env[1210]: time="2024-02-09T19:45:51.159748005Z" level=info msg="shim disconnected" id=8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04
Feb 9 19:45:51.159860 env[1210]: time="2024-02-09T19:45:51.159787931Z" level=warning msg="cleaning up after shim disconnected" id=8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04 namespace=k8s.io
Feb 9 19:45:51.159860 env[1210]: time="2024-02-09T19:45:51.159797850Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:51.165881 env[1210]: time="2024-02-09T19:45:51.165858879Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3111 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:51.168221 env[1210]: time="2024-02-09T19:45:51.168192619Z" level=info msg="StopContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" returns successfully"
Feb 9 19:45:51.168800 env[1210]: time="2024-02-09T19:45:51.168771578Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\""
Feb 9 19:45:51.168855 env[1210]: time="2024-02-09T19:45:51.168831791Z" level=info msg="Container to stop \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:45:51.170552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3-shm.mount: Deactivated successfully.
Feb 9 19:45:51.186441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3-rootfs.mount: Deactivated successfully.
Feb 9 19:45:51.188624 env[1210]: time="2024-02-09T19:45:51.188586218Z" level=info msg="shim disconnected" id=005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3
Feb 9 19:45:51.188704 env[1210]: time="2024-02-09T19:45:51.188627445Z" level=warning msg="cleaning up after shim disconnected" id=005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3 namespace=k8s.io
Feb 9 19:45:51.188704 env[1210]: time="2024-02-09T19:45:51.188637313Z" level=info msg="cleaning up dead shim"
Feb 9 19:45:51.194180 env[1210]: time="2024-02-09T19:45:51.194155582Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3143 runtime=io.containerd.runc.v2\n"
Feb 9 19:45:51.194442 env[1210]: time="2024-02-09T19:45:51.194379594Z" level=info msg="TearDown network for sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" successfully"
Feb 9 19:45:51.194442 env[1210]: time="2024-02-09T19:45:51.194412255Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" returns successfully"
Feb 9 19:45:51.289077 kubelet[2166]: I0209 19:45:51.289042 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0adbd313-a370-496b-9907-dd54c7fab656-typha-certs\") pod \"0adbd313-a370-496b-9907-dd54c7fab656\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") "
Feb 9 19:45:51.289077 kubelet[2166]: I0209 19:45:51.289084 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p8x9\" (UniqueName: \"kubernetes.io/projected/0adbd313-a370-496b-9907-dd54c7fab656-kube-api-access-9p8x9\") pod \"0adbd313-a370-496b-9907-dd54c7fab656\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") "
Feb 9 19:45:51.289435 kubelet[2166]: I0209 19:45:51.289108 2166 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0adbd313-a370-496b-9907-dd54c7fab656-tigera-ca-bundle\") pod \"0adbd313-a370-496b-9907-dd54c7fab656\" (UID: \"0adbd313-a370-496b-9907-dd54c7fab656\") "
Feb 9 19:45:51.291576 kubelet[2166]: I0209 19:45:51.291539 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0adbd313-a370-496b-9907-dd54c7fab656-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "0adbd313-a370-496b-9907-dd54c7fab656" (UID: "0adbd313-a370-496b-9907-dd54c7fab656"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:45:51.291735 kubelet[2166]: I0209 19:45:51.291716 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0adbd313-a370-496b-9907-dd54c7fab656-kube-api-access-9p8x9" (OuterVolumeSpecName: "kube-api-access-9p8x9") pod "0adbd313-a370-496b-9907-dd54c7fab656" (UID: "0adbd313-a370-496b-9907-dd54c7fab656"). InnerVolumeSpecName "kube-api-access-9p8x9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:45:51.292204 kubelet[2166]: W0209 19:45:51.292181 2166 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0adbd313-a370-496b-9907-dd54c7fab656/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled
Feb 9 19:45:51.292360 kubelet[2166]: I0209 19:45:51.292340 2166 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0adbd313-a370-496b-9907-dd54c7fab656-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0adbd313-a370-496b-9907-dd54c7fab656" (UID: "0adbd313-a370-496b-9907-dd54c7fab656"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:45:51.292997 systemd[1]: var-lib-kubelet-pods-0adbd313\x2da370\x2d496b\x2d9907\x2ddd54c7fab656-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully.
Feb 9 19:45:51.293134 systemd[1]: var-lib-kubelet-pods-0adbd313\x2da370\x2d496b\x2d9907\x2ddd54c7fab656-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9p8x9.mount: Deactivated successfully.
Feb 9 19:45:51.293220 systemd[1]: var-lib-kubelet-pods-0adbd313\x2da370\x2d496b\x2d9907\x2ddd54c7fab656-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully.
Feb 9 19:45:51.389809 kubelet[2166]: I0209 19:45:51.389747 2166 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0adbd313-a370-496b-9907-dd54c7fab656-typha-certs\") on node \"localhost\" DevicePath \"\""
Feb 9 19:45:51.389809 kubelet[2166]: I0209 19:45:51.389772 2166 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-9p8x9\" (UniqueName: \"kubernetes.io/projected/0adbd313-a370-496b-9907-dd54c7fab656-kube-api-access-9p8x9\") on node \"localhost\" DevicePath \"\""
Feb 9 19:45:51.389809 kubelet[2166]: I0209 19:45:51.389782 2166 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0adbd313-a370-496b-9907-dd54c7fab656-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\""
Feb 9 19:45:52.028229 kubelet[2166]: E0209 19:45:52.028169 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db
Feb 9 19:45:52.136047 kubelet[2166]: I0209 19:45:52.136005 2166 scope.go:115] "RemoveContainer" containerID="8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04"
Feb 9 19:45:52.138227 env[1210]: time="2024-02-09T19:45:52.138178737Z" level=info msg="RemoveContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\""
Feb 9 19:45:52.141093 env[1210]: time="2024-02-09T19:45:52.141059074Z" level=info msg="RemoveContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" returns successfully"
Feb 9 19:45:52.141223 kubelet[2166]: I0209 19:45:52.141203 2166 scope.go:115] "RemoveContainer" containerID="8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04"
Feb 9 19:45:52.141437 env[1210]: time="2024-02-09T19:45:52.141352705Z" level=error msg="ContainerStatus for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": not found"
Feb 9 19:45:52.141597 kubelet[2166]: E0209 19:45:52.141581 2166 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": not found" containerID="8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04"
Feb 9 19:45:52.141659 kubelet[2166]: I0209 19:45:52.141623 2166 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04} err="failed to get container status \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": rpc error: code = NotFound desc = an error occurred when try to find container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": not found"
Feb 9 19:45:52.153122 kubelet[2166]: I0209 19:45:52.153081 2166 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:45:52.153122 kubelet[2166]: E0209 19:45:52.153150 2166 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0adbd313-a370-496b-9907-dd54c7fab656" containerName="calico-typha"
Feb 9 19:45:52.153335 kubelet[2166]: I0209 19:45:52.153184 2166 memory_manager.go:346] "RemoveStaleState removing state" podUID="0adbd313-a370-496b-9907-dd54c7fab656" containerName="calico-typha"
Feb 9 19:45:52.185000 audit[3183]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.188876 kernel: kauditd_printk_skb: 8 callbacks suppressed
Feb 9 19:45:52.188936 kernel: audit: type=1325 audit(1707507952.185:280): table=filter:109 family=2 entries=14 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.188975 kernel: audit: type=1300 audit(1707507952.185:280): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffea45afd00 a2=0 a3=7ffea45afcec items=0 ppid=2345 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.185000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7ffea45afd00 a2=0 a3=7ffea45afcec items=0 ppid=2345 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.191981 kernel: audit: type=1327 audit(1707507952.185:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.193272 kubelet[2166]: I0209 19:45:52.193234 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3cb54617-8097-4780-bd56-028074a8ae3c-tigera-ca-bundle\") pod \"calico-typha-579498f6f9-6b85d\" (UID: \"3cb54617-8097-4780-bd56-028074a8ae3c\") " pod="calico-system/calico-typha-579498f6f9-6b85d"
Feb 9 19:45:52.193369 kubelet[2166]: I0209 19:45:52.193287 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76g25\" (UniqueName: \"kubernetes.io/projected/3cb54617-8097-4780-bd56-028074a8ae3c-kube-api-access-76g25\") pod \"calico-typha-579498f6f9-6b85d\" (UID: \"3cb54617-8097-4780-bd56-028074a8ae3c\") " pod="calico-system/calico-typha-579498f6f9-6b85d"
Feb 9 19:45:52.193369 kubelet[2166]: I0209 19:45:52.193327 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3cb54617-8097-4780-bd56-028074a8ae3c-typha-certs\") pod \"calico-typha-579498f6f9-6b85d\" (UID: \"3cb54617-8097-4780-bd56-028074a8ae3c\") " pod="calico-system/calico-typha-579498f6f9-6b85d"
Feb 9 19:45:52.185000 audit[3183]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.185000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffea45afd00 a2=0 a3=7ffea45afcec items=0 ppid=2345 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.200150 kernel: audit: type=1325 audit(1707507952.185:281): table=nat:110 family=2 entries=20 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.200287 kernel: audit: type=1300 audit(1707507952.185:281): arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7ffea45afd00 a2=0 a3=7ffea45afcec items=0 ppid=2345 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.201798 kernel: audit: type=1327 audit(1707507952.185:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.221000 audit[3209]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.221000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fffa14fbed0 a2=0 a3=7fffa14fbebc items=0 ppid=2345 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.227710 kernel: audit: type=1325 audit(1707507952.221:282): table=filter:111 family=2 entries=14 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.227775 kernel: audit: type=1300 audit(1707507952.221:282): arch=c000003e syscall=46 success=yes exit=4732 a0=3 a1=7fffa14fbed0 a2=0 a3=7fffa14fbebc items=0 ppid=2345 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.227803 kernel: audit: type=1327 audit(1707507952.221:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.222000 audit[3209]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.222000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffa14fbed0 a2=0 a3=7fffa14fbebc items=0 ppid=2345 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:52.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:52.232415 kernel: audit: type=1325 audit(1707507952.222:283): table=nat:112 family=2 entries=20 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:52.456535 kubelet[2166]: E0209 19:45:52.456489 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:52.457153 env[1210]: time="2024-02-09T19:45:52.457103953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579498f6f9-6b85d,Uid:3cb54617-8097-4780-bd56-028074a8ae3c,Namespace:calico-system,Attempt:0,}"
Feb 9 19:45:52.469985 env[1210]: time="2024-02-09T19:45:52.469923010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:45:52.469985 env[1210]: time="2024-02-09T19:45:52.469965670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:45:52.469985 env[1210]: time="2024-02-09T19:45:52.469976721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:45:52.470185 env[1210]: time="2024-02-09T19:45:52.470142202Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0dfdadbcf8bbdf4b4314258542c48b19887c7041fd59ca9ca90f49f384422eb2 pid=3219 runtime=io.containerd.runc.v2
Feb 9 19:45:52.516098 env[1210]: time="2024-02-09T19:45:52.516040765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-579498f6f9-6b85d,Uid:3cb54617-8097-4780-bd56-028074a8ae3c,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dfdadbcf8bbdf4b4314258542c48b19887c7041fd59ca9ca90f49f384422eb2\""
Feb 9 19:45:52.516691 kubelet[2166]: E0209 19:45:52.516665 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:52.524182 env[1210]: time="2024-02-09T19:45:52.524137722Z" level=info msg="CreateContainer within sandbox \"0dfdadbcf8bbdf4b4314258542c48b19887c7041fd59ca9ca90f49f384422eb2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 9 19:45:52.548461 env[1210]: time="2024-02-09T19:45:52.548422640Z" level=info msg="CreateContainer within sandbox \"0dfdadbcf8bbdf4b4314258542c48b19887c7041fd59ca9ca90f49f384422eb2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"15964681b73204216f07e8e4992e01a335a30689a989b2bfc42ed9ace8bd5387\""
Feb 9 19:45:52.548854 env[1210]: time="2024-02-09T19:45:52.548825128Z" level=info msg="StartContainer for \"15964681b73204216f07e8e4992e01a335a30689a989b2bfc42ed9ace8bd5387\""
Feb 9 19:45:52.615214 env[1210]: time="2024-02-09T19:45:52.615148711Z" level=info msg="StartContainer for \"15964681b73204216f07e8e4992e01a335a30689a989b2bfc42ed9ace8bd5387\" returns successfully"
Feb 9 19:45:53.029274 env[1210]: time="2024-02-09T19:45:53.029169246Z" level=info msg="StopContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" with timeout 1 (s)"
Feb 9 19:45:53.029274 env[1210]: time="2024-02-09T19:45:53.029221926Z" level=error msg="StopContainer for \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": not found"
Feb 9 19:45:53.029905 kubelet[2166]: E0209 19:45:53.029852 2166 remote_runtime.go:349] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04\": not found" containerID="8dd7a9abdd0714914c1d8fa09b7ac301c65fa5ac8c16b7a70c7818afffa99f04"
Feb 9 19:45:53.030688 env[1210]: time="2024-02-09T19:45:53.030596099Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\""
Feb 9 19:45:53.030824 env[1210]: time="2024-02-09T19:45:53.030745250Z" level=info msg="TearDown network for sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" successfully"
Feb 9 19:45:53.030824 env[1210]: time="2024-02-09T19:45:53.030818477Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" returns successfully"
Feb 9 19:45:53.031014 kubelet[2166]: I0209 19:45:53.030996 2166 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0adbd313-a370-496b-9907-dd54c7fab656 path="/var/lib/kubelet/pods/0adbd313-a370-496b-9907-dd54c7fab656/volumes"
Feb 9 19:45:53.140140 kubelet[2166]: E0209 19:45:53.139905 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:53.150002 kubelet[2166]: I0209 19:45:53.149971 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-579498f6f9-6b85d" podStartSLOduration=13.149929417 pod.CreationTimestamp="2024-02-09 19:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:45:53.149790095 +0000 UTC m=+32.242948401" watchObservedRunningTime="2024-02-09 19:45:53.149929417 +0000 UTC m=+32.243087723"
Feb 9 19:45:53.197000 audit[3318]: NETFILTER_CFG table=filter:113 family=2 entries=13 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:53.197000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff9482a2c0 a2=0 a3=7fff9482a2ac items=0 ppid=2345 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:53.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:53.198000 audit[3318]: NETFILTER_CFG table=nat:114 family=2 entries=27 op=nft_register_chain pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 9 19:45:53.198000 audit[3318]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7fff9482a2c0 a2=0 a3=7fff9482a2ac items=0 ppid=2345 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:53.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 9 19:45:54.028092 kubelet[2166]: E0209 19:45:54.028049 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db
Feb 9 19:45:54.142712 kubelet[2166]: E0209 19:45:54.142682 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:55.144466 kubelet[2166]: E0209 19:45:55.144427 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:56.028703 kubelet[2166]: E0209 19:45:56.028655 2166 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db
Feb 9 19:45:56.333726 env[1210]: time="2024-02-09T19:45:56.333624518Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:56.335549 env[1210]: time="2024-02-09T19:45:56.335515692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:56.337055 env[1210]: time="2024-02-09T19:45:56.337029506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:56.338548 env[1210]: time="2024-02-09T19:45:56.338508606Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:45:56.339107 env[1210]: time="2024-02-09T19:45:56.339076834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:8e8d96a874c0e2f137bc6e0ff4b9da4ac2341852e41d99ab81983d329bb87d93\""
Feb 9 19:45:56.340685 env[1210]: time="2024-02-09T19:45:56.340659638Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 9 19:45:56.357342 env[1210]: time="2024-02-09T19:45:56.357298748Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665\""
Feb 9 19:45:56.357886 env[1210]: time="2024-02-09T19:45:56.357833092Z" level=info msg="StartContainer for \"e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665\""
Feb 9 19:45:56.409115 env[1210]: time="2024-02-09T19:45:56.409068547Z" level=info msg="StartContainer for \"e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665\" returns successfully"
Feb 9 19:45:56.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.60:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:56.502643 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:37718.service.
Feb 9 19:45:56.903000 audit[3355]: USER_ACCT pid=3355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:56.904672 sshd[3355]: Accepted publickey for core from 10.0.0.1 port 37718 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb 9 19:45:56.904000 audit[3355]: CRED_ACQ pid=3355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:56.904000 audit[3355]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe95015b40 a2=3 a3=0 items=0 ppid=1 pid=3355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:45:56.904000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:45:56.906048 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:45:56.909861 systemd-logind[1192]: New session 8 of user core.
Feb 9 19:45:56.910158 systemd[1]: Started session-8.scope.
Feb 9 19:45:56.913000 audit[3355]: USER_START pid=3355 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:56.915000 audit[3358]: CRED_ACQ pid=3358 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:57.148766 kubelet[2166]: E0209 19:45:57.148738 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:45:57.400652 sshd[3355]: pam_unix(sshd:session): session closed for user core
Feb 9 19:45:57.400000 audit[3355]: USER_END pid=3355 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:57.402795 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:37718.service: Deactivated successfully.
Feb 9 19:45:57.403763 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 19:45:57.403797 systemd-logind[1192]: Session 8 logged out. Waiting for processes to exit.
Feb 9 19:45:57.406026 kernel: kauditd_printk_skb: 16 callbacks suppressed
Feb 9 19:45:57.406123 kernel: audit: type=1106 audit(1707507957.400:292): pid=3355 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:57.406149 kernel: audit: type=1104 audit(1707507957.400:293): pid=3355 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:57.400000 audit[3355]: CRED_DISP pid=3355 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:45:57.406447 systemd-logind[1192]: Removed session 8.
Feb 9 19:45:57.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.60:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:45:57.417681 kernel: audit: type=1131 audit(1707507957.401:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.60:22-10.0.0.1:37718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Feb 9 19:45:57.734630 env[1210]: time="2024-02-09T19:45:57.734456208Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:45:57.751225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665-rootfs.mount: Deactivated successfully. Feb 9 19:45:57.753296 env[1210]: time="2024-02-09T19:45:57.753249381Z" level=info msg="shim disconnected" id=e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665 Feb 9 19:45:57.753296 env[1210]: time="2024-02-09T19:45:57.753291460Z" level=warning msg="cleaning up after shim disconnected" id=e3d7e8c3d9ff421af11c2fb40d867930d702ee25327a84e73a051be8d395b665 namespace=k8s.io Feb 9 19:45:57.753296 env[1210]: time="2024-02-09T19:45:57.753299725Z" level=info msg="cleaning up dead shim" Feb 9 19:45:57.759699 env[1210]: time="2024-02-09T19:45:57.759657060Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:45:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3395 runtime=io.containerd.runc.v2\n" Feb 9 19:45:57.777462 kubelet[2166]: I0209 19:45:57.777437 2166 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:45:57.796727 kubelet[2166]: I0209 19:45:57.795031 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:57.797364 kubelet[2166]: I0209 19:45:57.797339 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:57.797595 kubelet[2166]: I0209 19:45:57.797573 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:45:57.830962 kubelet[2166]: I0209 19:45:57.830906 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdpr\" (UniqueName: 
\"kubernetes.io/projected/5d3b013a-7876-459c-882d-e9f8c04bb711-kube-api-access-kwdpr\") pod \"coredns-787d4945fb-rrzs2\" (UID: \"5d3b013a-7876-459c-882d-e9f8c04bb711\") " pod="kube-system/coredns-787d4945fb-rrzs2" Feb 9 19:45:57.830962 kubelet[2166]: I0209 19:45:57.830965 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6677b\" (UniqueName: \"kubernetes.io/projected/109b6767-1ceb-4335-86d1-e08458d36dc3-kube-api-access-6677b\") pod \"coredns-787d4945fb-mlq8f\" (UID: \"109b6767-1ceb-4335-86d1-e08458d36dc3\") " pod="kube-system/coredns-787d4945fb-mlq8f" Feb 9 19:45:57.831150 kubelet[2166]: I0209 19:45:57.830997 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d3b013a-7876-459c-882d-e9f8c04bb711-config-volume\") pod \"coredns-787d4945fb-rrzs2\" (UID: \"5d3b013a-7876-459c-882d-e9f8c04bb711\") " pod="kube-system/coredns-787d4945fb-rrzs2" Feb 9 19:45:57.831150 kubelet[2166]: I0209 19:45:57.831136 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/109b6767-1ceb-4335-86d1-e08458d36dc3-config-volume\") pod \"coredns-787d4945fb-mlq8f\" (UID: \"109b6767-1ceb-4335-86d1-e08458d36dc3\") " pod="kube-system/coredns-787d4945fb-mlq8f" Feb 9 19:45:57.831226 kubelet[2166]: I0209 19:45:57.831213 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eab19472-8749-4447-8ba1-c4bcd69e7ff4-tigera-ca-bundle\") pod \"calico-kube-controllers-54744cf5f8-xqcbp\" (UID: \"eab19472-8749-4447-8ba1-c4bcd69e7ff4\") " pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" Feb 9 19:45:57.831257 kubelet[2166]: I0209 19:45:57.831246 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-5frmr\" (UniqueName: \"kubernetes.io/projected/eab19472-8749-4447-8ba1-c4bcd69e7ff4-kube-api-access-5frmr\") pod \"calico-kube-controllers-54744cf5f8-xqcbp\" (UID: \"eab19472-8749-4447-8ba1-c4bcd69e7ff4\") " pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" Feb 9 19:45:58.030561 env[1210]: time="2024-02-09T19:45:58.030453580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngqgb,Uid:991e9420-1ee3-42d4-b3be-2ddc6b5f52db,Namespace:calico-system,Attempt:0,}" Feb 9 19:45:58.080209 env[1210]: time="2024-02-09T19:45:58.080134999Z" level=error msg="Failed to destroy network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.080505 env[1210]: time="2024-02-09T19:45:58.080475058Z" level=error msg="encountered an error cleaning up failed sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.080542 env[1210]: time="2024-02-09T19:45:58.080517277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngqgb,Uid:991e9420-1ee3-42d4-b3be-2ddc6b5f52db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.080778 kubelet[2166]: E0209 19:45:58.080745 2166 remote_runtime.go:176] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.080917 kubelet[2166]: E0209 19:45:58.080814 2166 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:58.080917 kubelet[2166]: E0209 19:45:58.080839 2166 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ngqgb" Feb 9 19:45:58.080917 kubelet[2166]: E0209 19:45:58.080909 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ngqgb_calico-system(991e9420-1ee3-42d4-b3be-2ddc6b5f52db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ngqgb_calico-system(991e9420-1ee3-42d4-b3be-2ddc6b5f52db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:58.099149 kubelet[2166]: E0209 19:45:58.099130 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:58.099528 env[1210]: time="2024-02-09T19:45:58.099505141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mlq8f,Uid:109b6767-1ceb-4335-86d1-e08458d36dc3,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:58.101246 env[1210]: time="2024-02-09T19:45:58.101209112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54744cf5f8-xqcbp,Uid:eab19472-8749-4447-8ba1-c4bcd69e7ff4,Namespace:calico-system,Attempt:0,}" Feb 9 19:45:58.105240 kubelet[2166]: E0209 19:45:58.105212 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:58.106005 env[1210]: time="2024-02-09T19:45:58.105973583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rrzs2,Uid:5d3b013a-7876-459c-882d-e9f8c04bb711,Namespace:kube-system,Attempt:0,}" Feb 9 19:45:58.150322 kubelet[2166]: I0209 19:45:58.150299 2166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:45:58.151189 env[1210]: time="2024-02-09T19:45:58.151142855Z" level=info msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" Feb 9 19:45:58.153813 kubelet[2166]: E0209 19:45:58.153798 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:45:58.154297 env[1210]: time="2024-02-09T19:45:58.154266753Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 19:45:58.182418 env[1210]: time="2024-02-09T19:45:58.182342802Z" level=error msg="Failed to destroy network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.182712 env[1210]: time="2024-02-09T19:45:58.182684484Z" level=error msg="encountered an error cleaning up failed sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.182771 env[1210]: time="2024-02-09T19:45:58.182728696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mlq8f,Uid:109b6767-1ceb-4335-86d1-e08458d36dc3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.182964 kubelet[2166]: E0209 19:45:58.182940 2166 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.183029 kubelet[2166]: E0209 19:45:58.182995 2166 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-mlq8f" Feb 9 19:45:58.183029 kubelet[2166]: E0209 19:45:58.183014 2166 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-mlq8f" Feb 9 19:45:58.183087 kubelet[2166]: E0209 19:45:58.183058 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-mlq8f_kube-system(109b6767-1ceb-4335-86d1-e08458d36dc3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-mlq8f_kube-system(109b6767-1ceb-4335-86d1-e08458d36dc3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-mlq8f" podUID=109b6767-1ceb-4335-86d1-e08458d36dc3 Feb 9 19:45:58.183828 env[1210]: time="2024-02-09T19:45:58.183784119Z" level=error msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" failed" error="failed to destroy network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.183923 kubelet[2166]: E0209 19:45:58.183909 2166 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:45:58.183980 kubelet[2166]: E0209 19:45:58.183936 2166 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5} Feb 9 19:45:58.183980 kubelet[2166]: E0209 19:45:58.183964 2166 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:45:58.184060 kubelet[2166]: E0209 19:45:58.183984 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"991e9420-1ee3-42d4-b3be-2ddc6b5f52db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-ngqgb" podUID=991e9420-1ee3-42d4-b3be-2ddc6b5f52db Feb 9 19:45:58.190024 env[1210]: time="2024-02-09T19:45:58.189974480Z" level=error msg="Failed to destroy network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.190361 env[1210]: time="2024-02-09T19:45:58.190322955Z" level=error msg="encountered an error cleaning up failed sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.190448 env[1210]: time="2024-02-09T19:45:58.190373539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rrzs2,Uid:5d3b013a-7876-459c-882d-e9f8c04bb711,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.190564 kubelet[2166]: E0209 19:45:58.190544 2166 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.190614 kubelet[2166]: E0209 19:45:58.190589 2166 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-rrzs2" Feb 9 19:45:58.190647 kubelet[2166]: E0209 19:45:58.190617 2166 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-rrzs2" Feb 9 19:45:58.190694 kubelet[2166]: E0209 19:45:58.190679 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-rrzs2_kube-system(5d3b013a-7876-459c-882d-e9f8c04bb711)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-rrzs2_kube-system(5d3b013a-7876-459c-882d-e9f8c04bb711)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-rrzs2" podUID=5d3b013a-7876-459c-882d-e9f8c04bb711 Feb 9 19:45:58.193313 env[1210]: time="2024-02-09T19:45:58.193270632Z" level=error msg="Failed to destroy network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.193600 env[1210]: time="2024-02-09T19:45:58.193573171Z" level=error msg="encountered an error cleaning up failed sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.193633 env[1210]: time="2024-02-09T19:45:58.193614308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54744cf5f8-xqcbp,Uid:eab19472-8749-4447-8ba1-c4bcd69e7ff4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.193824 kubelet[2166]: E0209 19:45:58.193805 2166 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:58.193863 kubelet[2166]: E0209 19:45:58.193859 2166 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" Feb 9 19:45:58.193890 kubelet[2166]: E0209 19:45:58.193878 2166 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" Feb 9 19:45:58.193934 kubelet[2166]: E0209 19:45:58.193927 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54744cf5f8-xqcbp_calico-system(eab19472-8749-4447-8ba1-c4bcd69e7ff4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54744cf5f8-xqcbp_calico-system(eab19472-8749-4447-8ba1-c4bcd69e7ff4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" podUID=eab19472-8749-4447-8ba1-c4bcd69e7ff4 Feb 9 19:45:58.753034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5-shm.mount: Deactivated successfully. 
Feb 9 19:45:59.156100 kubelet[2166]: I0209 19:45:59.156065 2166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:45:59.156622 env[1210]: time="2024-02-09T19:45:59.156575475Z" level=info msg="StopPodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" Feb 9 19:45:59.157217 kubelet[2166]: I0209 19:45:59.157179 2166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:45:59.157795 env[1210]: time="2024-02-09T19:45:59.157765080Z" level=info msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" Feb 9 19:45:59.158239 kubelet[2166]: I0209 19:45:59.158219 2166 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:45:59.159048 env[1210]: time="2024-02-09T19:45:59.159015939Z" level=info msg="StopPodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" Feb 9 19:45:59.183316 env[1210]: time="2024-02-09T19:45:59.183248831Z" level=error msg="StopPodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" failed" error="failed to destroy network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:59.183598 kubelet[2166]: E0209 19:45:59.183560 2166 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:45:59.183667 kubelet[2166]: E0209 19:45:59.183629 2166 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b} Feb 9 19:45:59.183693 kubelet[2166]: E0209 19:45:59.183679 2166 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d3b013a-7876-459c-882d-e9f8c04bb711\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:45:59.183762 kubelet[2166]: E0209 19:45:59.183718 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d3b013a-7876-459c-882d-e9f8c04bb711\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-rrzs2" podUID=5d3b013a-7876-459c-882d-e9f8c04bb711 Feb 9 19:45:59.184952 env[1210]: time="2024-02-09T19:45:59.184913849Z" level=error msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" failed" error="failed to destroy network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:59.185082 kubelet[2166]: E0209 19:45:59.185064 2166 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:45:59.185128 kubelet[2166]: E0209 19:45:59.185091 2166 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984} Feb 9 19:45:59.185158 kubelet[2166]: E0209 19:45:59.185129 2166 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"109b6767-1ceb-4335-86d1-e08458d36dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:45:59.185208 kubelet[2166]: E0209 19:45:59.185169 2166 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"109b6767-1ceb-4335-86d1-e08458d36dc3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-787d4945fb-mlq8f" podUID=109b6767-1ceb-4335-86d1-e08458d36dc3 Feb 9 19:45:59.191159 env[1210]: time="2024-02-09T19:45:59.191101432Z" level=error msg="StopPodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" failed" error="failed to destroy network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 19:45:59.191404 kubelet[2166]: E0209 19:45:59.191369 2166 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:45:59.191404 kubelet[2166]: E0209 19:45:59.191405 2166 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf} Feb 9 19:45:59.191478 kubelet[2166]: E0209 19:45:59.191442 2166 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eab19472-8749-4447-8ba1-c4bcd69e7ff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 19:45:59.191478 kubelet[2166]: E0209 19:45:59.191476 2166 pod_workers.go:965] "Error syncing pod, skipping" 
err="failed to \"KillPodSandbox\" for \"eab19472-8749-4447-8ba1-c4bcd69e7ff4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" podUID=eab19472-8749-4447-8ba1-c4bcd69e7ff4 Feb 9 19:46:02.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.60:22-10.0.0.1:44056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:02.403847 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:44056.service. Feb 9 19:46:02.408412 kernel: audit: type=1130 audit(1707507962.402:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.60:22-10.0.0.1:44056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:02.443000 audit[3656]: USER_ACCT pid=3656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.446000 audit[3656]: CRED_ACQ pid=3656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.447967 sshd[3656]: Accepted publickey for core from 10.0.0.1 port 44056 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:02.448241 sshd[3656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:02.448498 kernel: audit: type=1101 audit(1707507962.443:296): pid=3656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.448537 kernel: audit: type=1103 audit(1707507962.446:297): pid=3656 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.446000 audit[3656]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffead0c0090 a2=3 a3=0 items=0 ppid=1 pid=3656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:02.454986 kernel: audit: type=1006 audit(1707507962.446:298): pid=3656 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 9 19:46:02.455035 kernel: audit: 
type=1300 audit(1707507962.446:298): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffead0c0090 a2=3 a3=0 items=0 ppid=1 pid=3656 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:02.457574 kernel: audit: type=1327 audit(1707507962.446:298): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:02.446000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:02.457314 systemd[1]: Started session-9.scope. Feb 9 19:46:02.457916 systemd-logind[1192]: New session 9 of user core. Feb 9 19:46:02.461000 audit[3656]: USER_START pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.464000 audit[3659]: CRED_ACQ pid=3659 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.468333 kernel: audit: type=1105 audit(1707507962.461:299): pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.468384 kernel: audit: type=1103 audit(1707507962.464:300): pid=3659 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.561683 sshd[3656]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:02.569053 kernel: audit: type=1106 
audit(1707507962.561:301): pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.569147 kernel: audit: type=1104 audit(1707507962.561:302): pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.561000 audit[3656]: USER_END pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.561000 audit[3656]: CRED_DISP pid=3656 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:02.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.60:22-10.0.0.1:44056 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:02.563600 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:44056.service: Deactivated successfully. Feb 9 19:46:02.564529 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:46:02.570074 systemd-logind[1192]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:46:02.570721 systemd-logind[1192]: Removed session 9. Feb 9 19:46:05.014955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2089173590.mount: Deactivated successfully. 
Feb 9 19:46:06.071406 env[1210]: time="2024-02-09T19:46:06.071339143Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.072944 env[1210]: time="2024-02-09T19:46:06.072894823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.074253 env[1210]: time="2024-02-09T19:46:06.074221464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.075467 env[1210]: time="2024-02-09T19:46:06.075446434Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:06.075853 env[1210]: time="2024-02-09T19:46:06.075826948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:1843802b91be8ff1c1d35ee08461ebe909e7a2199e59396f69886439a372312c\"" Feb 9 19:46:06.086948 env[1210]: time="2024-02-09T19:46:06.086890497Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 19:46:06.100050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218172852.mount: Deactivated successfully. 
Feb 9 19:46:06.106868 env[1210]: time="2024-02-09T19:46:06.106812896Z" level=info msg="CreateContainer within sandbox \"8a34bb1a089f224baa630bb3b40bb97ddd1e11e9b3c6e083f5aa48e2f32b1b96\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9f57b655b696e36b15521f4473bab03a76bd74d32030c199147d18cf9f1cae8d\"" Feb 9 19:46:06.107596 env[1210]: time="2024-02-09T19:46:06.107562794Z" level=info msg="StartContainer for \"9f57b655b696e36b15521f4473bab03a76bd74d32030c199147d18cf9f1cae8d\"" Feb 9 19:46:06.152917 env[1210]: time="2024-02-09T19:46:06.151555448Z" level=info msg="StartContainer for \"9f57b655b696e36b15521f4473bab03a76bd74d32030c199147d18cf9f1cae8d\" returns successfully" Feb 9 19:46:06.172999 kubelet[2166]: E0209 19:46:06.172975 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:06.189138 kubelet[2166]: I0209 19:46:06.189108 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-m6hf5" podStartSLOduration=-9.223372017665705e+09 pod.CreationTimestamp="2024-02-09 19:45:47 +0000 UTC" firstStartedPulling="2024-02-09 19:45:48.086423409 +0000 UTC m=+27.179581715" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:06.188304285 +0000 UTC m=+45.281462601" watchObservedRunningTime="2024-02-09 19:46:06.189071605 +0000 UTC m=+45.282229911" Feb 9 19:46:06.221988 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 19:46:06.222088 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 9 19:46:07.175434 kubelet[2166]: E0209 19:46:07.175327 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:07.448000 audit[3831]: AVC avc: denied { write } for pid=3831 comm="tee" name="fd" dev="proc" ino=24362 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.459730 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:07.459852 kernel: audit: type=1400 audit(1707507967.448:304): avc: denied { write } for pid=3831 comm="tee" name="fd" dev="proc" ino=24362 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.459877 kernel: audit: type=1300 audit(1707507967.448:304): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff24bff97f a2=241 a3=1b6 items=1 ppid=3799 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.448000 audit[3831]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff24bff97f a2=241 a3=1b6 items=1 ppid=3799 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.448000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:46:07.448000 audit: PATH item=0 name="/dev/fd/63" inode=24356 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.473275 kernel: audit: type=1307 audit(1707507967.448:304): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 19:46:07.473335 kernel: audit: type=1302 audit(1707507967.448:304): item=0 
name="/dev/fd/63" inode=24356 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.475164 kernel: audit: type=1327 audit(1707507967.448:304): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.448000 audit[3837]: AVC avc: denied { write } for pid=3837 comm="tee" name="fd" dev="proc" ino=26943 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.480209 kernel: audit: type=1400 audit(1707507967.448:305): avc: denied { write } for pid=3837 comm="tee" name="fd" dev="proc" ino=26943 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.448000 audit[3837]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb5d7f990 a2=241 a3=1b6 items=1 ppid=3806 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.483465 kernel: audit: type=1300 audit(1707507967.448:305): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeb5d7f990 a2=241 a3=1b6 items=1 ppid=3806 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.448000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 19:46:07.484441 kernel: audit: type=1307 audit(1707507967.448:305): cwd="/etc/service/enabled/bird/log" Feb 9 19:46:07.448000 audit: PATH item=0 
name="/dev/fd/63" inode=24359 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.488460 kernel: audit: type=1302 audit(1707507967.448:305): item=0 name="/dev/fd/63" inode=24359 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.488504 kernel: audit: type=1327 audit(1707507967.448:305): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.448000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.467000 audit[3822]: AVC avc: denied { write } for pid=3822 comm="tee" name="fd" dev="proc" ino=26209 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.467000 audit[3822]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe789bd991 a2=241 a3=1b6 items=1 ppid=3795 pid=3822 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.467000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 19:46:07.467000 audit: PATH item=0 name="/dev/fd/63" inode=25304 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.467000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.468000 audit[3851]: AVC avc: denied { write } for pid=3851 comm="tee" name="fd" dev="proc" 
ino=25329 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.468000 audit[3851]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff334d098f a2=241 a3=1b6 items=1 ppid=3801 pid=3851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.468000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 19:46:07.468000 audit: PATH item=0 name="/dev/fd/63" inode=25318 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.468000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.474000 audit[3839]: AVC avc: denied { write } for pid=3839 comm="tee" name="fd" dev="proc" ino=26956 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.474000 audit[3839]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff3477298f a2=241 a3=1b6 items=1 ppid=3807 pid=3839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.474000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 19:46:07.474000 audit: PATH item=0 name="/dev/fd/63" inode=25311 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.474000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.479000 
audit[3870]: AVC avc: denied { write } for pid=3870 comm="tee" name="fd" dev="proc" ino=25336 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.479000 audit[3870]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3d52c98f a2=241 a3=1b6 items=1 ppid=3794 pid=3870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.479000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 19:46:07.479000 audit: PATH item=0 name="/dev/fd/63" inode=25333 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.479000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.504000 audit[3878]: AVC avc: denied { write } for pid=3878 comm="tee" name="fd" dev="proc" ino=26214 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 19:46:07.504000 audit[3878]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffee36c6980 a2=241 a3=1b6 items=1 ppid=3796 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.504000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 19:46:07.504000 audit: PATH item=0 name="/dev/fd/63" inode=24372 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:46:07.504000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 19:46:07.564738 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:44058.service. Feb 9 19:46:07.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.60:22-10.0.0.1:44058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:07.606807 sshd[3903]: Accepted publickey for core from 10.0.0.1 port 44058 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:07.605000 audit[3903]: USER_ACCT pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.606000 audit[3903]: CRED_ACQ pid=3903 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.606000 audit[3903]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef939ff20 a2=3 a3=0 items=0 ppid=1 pid=3903 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:07.608295 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:07.612207 systemd[1]: Started session-10.scope. Feb 9 19:46:07.612591 systemd-logind[1192]: New session 10 of user core. 
Feb 9 19:46:07.616000 audit[3903]: USER_START pid=3903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.617000 audit[3931]: CRED_ACQ pid=3931 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit: BPF prog-id=10 op=LOAD Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf94894d0 a2=70 a3=7f8239793000 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit: BPF prog-id=11 op=LOAD Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcf94894d0 a2=70 a3=6e items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffcf9489480 a2=70 a3=7ffcf94894d0 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 
19:46:07.651000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit: BPF prog-id=12 op=LOAD Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf9489460 a2=70 a3=7ffcf94894d0 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf9489540 a2=70 a3=0 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffcf9489530 a2=70 a3=0 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.651000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.651000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffcf9489570 a2=70 a3=0 items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.651000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { perfmon } for pid=3945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit[3945]: AVC avc: denied { bpf } for pid=3945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.652000 audit: BPF prog-id=13 
op=LOAD Feb 9 19:46:07.652000 audit[3945]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcf9489490 a2=70 a3=ffffffff items=0 ppid=3797 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.652000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 19:46:07.655000 audit[3949]: AVC avc: denied { bpf } for pid=3949 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.655000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd267f99c0 a2=70 a3=208 items=0 ppid=3797 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.655000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:46:07.655000 audit[3949]: AVC avc: denied { bpf } for pid=3949 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 19:46:07.655000 audit[3949]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd267f9890 a2=70 a3=3 items=0 ppid=3797 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.655000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 19:46:07.665000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:46:07.693000 audit[3985]: NETFILTER_CFG table=mangle:115 family=2 entries=19 op=nft_register_chain pid=3985 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:07.693000 audit[3985]: SYSCALL arch=c000003e syscall=46 success=yes exit=6800 a0=3 a1=7ffe1a2b2960 a2=0 a3=7ffe1a2b294c items=0 ppid=3797 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:07.697000 audit[3984]: NETFILTER_CFG table=raw:116 family=2 entries=19 op=nft_register_chain pid=3984 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:07.697000 audit[3984]: SYSCALL arch=c000003e syscall=46 success=yes exit=6132 a0=3 a1=7fff6083ff60 a2=0 a3=7fff6083ff4c items=0 ppid=3797 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.697000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:07.701000 audit[3986]: NETFILTER_CFG table=nat:117 family=2 entries=16 op=nft_register_chain pid=3986 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:07.701000 audit[3986]: SYSCALL arch=c000003e syscall=46 success=yes exit=5188 a0=3 a1=7fff76c84a20 a2=0 
a3=559217a04000 items=0 ppid=3797 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.701000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:07.703000 audit[3987]: NETFILTER_CFG table=filter:118 family=2 entries=39 op=nft_register_chain pid=3987 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:07.703000 audit[3987]: SYSCALL arch=c000003e syscall=46 success=yes exit=18472 a0=3 a1=7ffe547fdc20 a2=0 a3=562e53ba7000 items=0 ppid=3797 pid=3987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:07.703000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:07.723462 sshd[3903]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:07.722000 audit[3903]: USER_END pid=3903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.722000 audit[3903]: CRED_DISP pid=3903 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:07.725795 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:44058.service: Deactivated successfully. 
Feb 9 19:46:07.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.60:22-10.0.0.1:44058 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:07.726483 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:46:07.726806 systemd-logind[1192]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:46:07.727849 systemd-logind[1192]: Removed session 10. Feb 9 19:46:08.176964 kubelet[2166]: E0209 19:46:08.176936 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:08.585740 systemd-networkd[1085]: vxlan.calico: Link UP Feb 9 19:46:08.585750 systemd-networkd[1085]: vxlan.calico: Gained carrier Feb 9 19:46:09.732552 systemd-networkd[1085]: vxlan.calico: Gained IPv6LL Feb 9 19:46:10.029216 env[1210]: time="2024-02-09T19:46:10.029096025Z" level=info msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.071 [INFO][4042] k8s.go 578: Cleaning up netns ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.071 [INFO][4042] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" iface="eth0" netns="/var/run/netns/cni-0022ab59-37cc-ede2-0006-1275145b7ad3" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.074 [INFO][4042] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" iface="eth0" netns="/var/run/netns/cni-0022ab59-37cc-ede2-0006-1275145b7ad3" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.074 [INFO][4042] dataplane_linux.go 568: Workload's veth was already gone. 
Nothing to do. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" iface="eth0" netns="/var/run/netns/cni-0022ab59-37cc-ede2-0006-1275145b7ad3" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.074 [INFO][4042] k8s.go 585: Releasing IP address(es) ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.074 [INFO][4042] utils.go 188: Calico CNI releasing IP address ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.116 [INFO][4049] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.116 [INFO][4049] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.116 [INFO][4049] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.124 [WARNING][4049] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.124 [INFO][4049] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.125 [INFO][4049] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:10.128437 env[1210]: 2024-02-09 19:46:10.127 [INFO][4042] k8s.go 591: Teardown processing complete. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:10.128923 env[1210]: time="2024-02-09T19:46:10.128588215Z" level=info msg="TearDown network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" successfully" Feb 9 19:46:10.128923 env[1210]: time="2024-02-09T19:46:10.128634051Z" level=info msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" returns successfully" Feb 9 19:46:10.129248 env[1210]: time="2024-02-09T19:46:10.129209361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngqgb,Uid:991e9420-1ee3-42d4-b3be-2ddc6b5f52db,Namespace:calico-system,Attempt:1,}" Feb 9 19:46:10.131108 systemd[1]: run-netns-cni\x2d0022ab59\x2d37cc\x2dede2\x2d0006\x2d1275145b7ad3.mount: Deactivated successfully. Feb 9 19:46:10.379888 systemd-networkd[1085]: cali599f826e24c: Link UP Feb 9 19:46:10.382174 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:10.382252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali599f826e24c: link becomes ready Feb 9 19:46:10.382415 systemd-networkd[1085]: cali599f826e24c: Gained carrier Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.320 [INFO][4058] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ngqgb-eth0 csi-node-driver- calico-system 991e9420-1ee3-42d4-b3be-2ddc6b5f52db 862 0 2024-02-09 19:45:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-ngqgb eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali599f826e24c [] 
[]}} ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.320 [INFO][4058] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.345 [INFO][4071] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" HandleID="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.355 [INFO][4071] ipam_plugin.go 268: Auto assigning IP ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" HandleID="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000519f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ngqgb", "timestamp":"2024-02-09 19:46:10.345688754 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.356 [INFO][4071] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.356 [INFO][4071] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.356 [INFO][4071] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.358 [INFO][4071] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.361 [INFO][4071] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.364 [INFO][4071] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.365 [INFO][4071] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.367 [INFO][4071] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.367 [INFO][4071] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.368 [INFO][4071] ipam.go 1682: Creating new handle: k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136 Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.370 [INFO][4071] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.374 [INFO][4071] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.374 [INFO][4071] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" host="localhost" Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.374 [INFO][4071] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:10.393501 env[1210]: 2024-02-09 19:46:10.374 [INFO][4071] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" HandleID="k8s-pod-network.046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.377 [INFO][4058] k8s.go 385: Populated endpoint ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ngqgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"991e9420-1ee3-42d4-b3be-2ddc6b5f52db", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ngqgb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali599f826e24c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.377 [INFO][4058] k8s.go 386: Calico CNI using IPs: [192.168.88.129/32] ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.377 [INFO][4058] dataplane_linux.go 68: Setting the host side veth name to cali599f826e24c ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.380 [INFO][4058] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.382 [INFO][4058] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ngqgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"991e9420-1ee3-42d4-b3be-2ddc6b5f52db", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136", Pod:"csi-node-driver-ngqgb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali599f826e24c", MAC:"52:09:3e:b8:37:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:10.394218 env[1210]: 2024-02-09 19:46:10.390 [INFO][4058] k8s.go 491: Wrote updated endpoint to datastore ContainerID="046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136" Namespace="calico-system" Pod="csi-node-driver-ngqgb" WorkloadEndpoint="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:10.405851 env[1210]: time="2024-02-09T19:46:10.405786000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:10.405851 env[1210]: time="2024-02-09T19:46:10.405830613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:10.405851 env[1210]: time="2024-02-09T19:46:10.405841073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:10.406507 env[1210]: time="2024-02-09T19:46:10.406405933Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136 pid=4102 runtime=io.containerd.runc.v2 Feb 9 19:46:10.408000 audit[4113]: NETFILTER_CFG table=filter:119 family=2 entries=36 op=nft_register_chain pid=4113 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:10.408000 audit[4113]: SYSCALL arch=c000003e syscall=46 success=yes exit=19908 a0=3 a1=7fff22754fc0 a2=0 a3=7fff22754fac items=0 ppid=3797 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:10.408000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:10.437102 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:10.447789 env[1210]: time="2024-02-09T19:46:10.447747701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ngqgb,Uid:991e9420-1ee3-42d4-b3be-2ddc6b5f52db,Namespace:calico-system,Attempt:1,} returns sandbox id \"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136\"" Feb 9 19:46:10.449244 env[1210]: time="2024-02-09T19:46:10.449219322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 19:46:11.029318 env[1210]: time="2024-02-09T19:46:11.029264096Z" level=info msg="StopPodSandbox for 
\"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" Feb 9 19:46:11.029832 env[1210]: time="2024-02-09T19:46:11.029793099Z" level=info msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.077 [INFO][4170] k8s.go 578: Cleaning up netns ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.078 [INFO][4170] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" iface="eth0" netns="/var/run/netns/cni-c3dee656-e36e-56ca-5948-c24309ee271c" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.079 [INFO][4170] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" iface="eth0" netns="/var/run/netns/cni-c3dee656-e36e-56ca-5948-c24309ee271c" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.079 [INFO][4170] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" iface="eth0" netns="/var/run/netns/cni-c3dee656-e36e-56ca-5948-c24309ee271c" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.079 [INFO][4170] k8s.go 585: Releasing IP address(es) ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.079 [INFO][4170] utils.go 188: Calico CNI releasing IP address ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.110 [INFO][4183] ipam_plugin.go 415: Releasing address using handleID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.110 [INFO][4183] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.110 [INFO][4183] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.117 [WARNING][4183] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.117 [INFO][4183] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.119 [INFO][4183] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:11.121947 env[1210]: 2024-02-09 19:46:11.120 [INFO][4170] k8s.go 591: Teardown processing complete. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:11.122904 env[1210]: time="2024-02-09T19:46:11.122118077Z" level=info msg="TearDown network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" successfully" Feb 9 19:46:11.122904 env[1210]: time="2024-02-09T19:46:11.122169114Z" level=info msg="StopPodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" returns successfully" Feb 9 19:46:11.123047 kubelet[2166]: E0209 19:46:11.122479 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:11.123716 env[1210]: time="2024-02-09T19:46:11.123085793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rrzs2,Uid:5d3b013a-7876-459c-882d-e9f8c04bb711,Namespace:kube-system,Attempt:1,}" Feb 9 19:46:11.132252 systemd[1]: run-netns-cni\x2dc3dee656\x2de36e\x2d56ca\x2d5948\x2dc24309ee271c.mount: Deactivated successfully. Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.076 [INFO][4169] k8s.go 578: Cleaning up netns ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.076 [INFO][4169] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" iface="eth0" netns="/var/run/netns/cni-c2661789-4e91-9ef8-3901-3838a3b28738" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.077 [INFO][4169] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" iface="eth0" netns="/var/run/netns/cni-c2661789-4e91-9ef8-3901-3838a3b28738" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.077 [INFO][4169] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" iface="eth0" netns="/var/run/netns/cni-c2661789-4e91-9ef8-3901-3838a3b28738" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.077 [INFO][4169] k8s.go 585: Releasing IP address(es) ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.077 [INFO][4169] utils.go 188: Calico CNI releasing IP address ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.112 [INFO][4182] ipam_plugin.go 415: Releasing address using handleID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.112 [INFO][4182] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.119 [INFO][4182] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.128 [WARNING][4182] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.128 [INFO][4182] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.133 [INFO][4182] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:11.138497 env[1210]: 2024-02-09 19:46:11.135 [INFO][4169] k8s.go 591: Teardown processing complete. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:11.139246 env[1210]: time="2024-02-09T19:46:11.139199973Z" level=info msg="TearDown network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" successfully" Feb 9 19:46:11.139338 env[1210]: time="2024-02-09T19:46:11.139314909Z" level=info msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" returns successfully" Feb 9 19:46:11.140223 kubelet[2166]: E0209 19:46:11.139728 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:11.140570 env[1210]: time="2024-02-09T19:46:11.140546841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mlq8f,Uid:109b6767-1ceb-4335-86d1-e08458d36dc3,Namespace:kube-system,Attempt:1,}" Feb 9 19:46:11.142441 systemd[1]: run-netns-cni\x2dc2661789\x2d4e91\x2d9ef8\x2d3901\x2d3838a3b28738.mount: Deactivated successfully. 
Feb 9 19:46:11.305173 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali32b18dbcd9e: link becomes ready Feb 9 19:46:11.301491 systemd-networkd[1085]: cali32b18dbcd9e: Link UP Feb 9 19:46:11.303258 systemd-networkd[1085]: cali32b18dbcd9e: Gained carrier Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.202 [INFO][4207] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--mlq8f-eth0 coredns-787d4945fb- kube-system 109b6767-1ceb-4335-86d1-e08458d36dc3 874 0 2024-02-09 19:45:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-mlq8f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali32b18dbcd9e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.203 [INFO][4207] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.232 [INFO][4222] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" HandleID="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.242 [INFO][4222] ipam_plugin.go 268: Auto assigning IP ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" 
HandleID="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051d30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-mlq8f", "timestamp":"2024-02-09 19:46:11.232723412 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.242 [INFO][4222] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.242 [INFO][4222] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.242 [INFO][4222] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.244 [INFO][4222] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.257 [INFO][4222] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.263 [INFO][4222] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.266 [INFO][4222] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.268 [INFO][4222] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.268 [INFO][4222] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.269 [INFO][4222] ipam.go 1682: Creating new handle: k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236 Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.272 [INFO][4222] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.294 [INFO][4222] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.294 [INFO][4222] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" host="localhost" Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.295 [INFO][4222] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:11.314129 env[1210]: 2024-02-09 19:46:11.295 [INFO][4222] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" HandleID="k8s-pod-network.2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.298 [INFO][4207] k8s.go 385: Populated endpoint ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--mlq8f-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"109b6767-1ceb-4335-86d1-e08458d36dc3", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-mlq8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b18dbcd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.298 [INFO][4207] k8s.go 386: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.298 [INFO][4207] dataplane_linux.go 68: Setting the host side veth name to cali32b18dbcd9e ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.304 [INFO][4207] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.304 [INFO][4207] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--mlq8f-eth0", GenerateName:"coredns-787d4945fb-", 
Namespace:"kube-system", SelfLink:"", UID:"109b6767-1ceb-4335-86d1-e08458d36dc3", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236", Pod:"coredns-787d4945fb-mlq8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b18dbcd9e", MAC:"e6:c4:a2:a4:35:a6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:11.314727 env[1210]: 2024-02-09 19:46:11.311 [INFO][4207] k8s.go 491: Wrote updated endpoint to datastore ContainerID="2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236" Namespace="kube-system" Pod="coredns-787d4945fb-mlq8f" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:11.332000 audit[4256]: NETFILTER_CFG table=filter:120 family=2 entries=40 
op=nft_register_chain pid=4256 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:11.332000 audit[4256]: SYSCALL arch=c000003e syscall=46 success=yes exit=21096 a0=3 a1=7fffe9e3cfc0 a2=0 a3=7fffe9e3cfac items=0 ppid=3797 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:11.332000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:11.337256 env[1210]: time="2024-02-09T19:46:11.337050684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:11.337256 env[1210]: time="2024-02-09T19:46:11.337092644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:11.337256 env[1210]: time="2024-02-09T19:46:11.337105137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:11.338051 env[1210]: time="2024-02-09T19:46:11.337611737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236 pid=4262 runtime=io.containerd.runc.v2 Feb 9 19:46:11.372047 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9316f713663: link becomes ready Feb 9 19:46:11.373505 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:11.374020 systemd-networkd[1085]: cali9316f713663: Link UP Feb 9 19:46:11.382665 systemd-networkd[1085]: cali9316f713663: Gained carrier Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.192 [INFO][4197] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--787d4945fb--rrzs2-eth0 coredns-787d4945fb- kube-system 5d3b013a-7876-459c-882d-e9f8c04bb711 875 0 2024-02-09 19:45:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-787d4945fb-rrzs2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9316f713663 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.192 [INFO][4197] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.256 [INFO][4227] 
ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" HandleID="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.272 [INFO][4227] ipam_plugin.go 268: Auto assigning IP ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" HandleID="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-787d4945fb-rrzs2", "timestamp":"2024-02-09 19:46:11.25618487 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.272 [INFO][4227] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.295 [INFO][4227] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.295 [INFO][4227] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.309 [INFO][4227] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.335 [INFO][4227] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.342 [INFO][4227] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.345 [INFO][4227] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.350 [INFO][4227] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.350 [INFO][4227] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.353 [INFO][4227] ipam.go 1682: Creating new handle: k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.359 [INFO][4227] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.364 [INFO][4227] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.364 [INFO][4227] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" host="localhost" Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.364 [INFO][4227] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:11.390371 env[1210]: 2024-02-09 19:46:11.365 [INFO][4227] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" HandleID="k8s-pod-network.4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.368 [INFO][4197] k8s.go 385: Populated endpoint ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--rrzs2-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"5d3b013a-7876-459c-882d-e9f8c04bb711", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-787d4945fb-rrzs2", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9316f713663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.368 [INFO][4197] k8s.go 386: Calico CNI using IPs: [192.168.88.131/32] ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.368 [INFO][4197] dataplane_linux.go 68: Setting the host side veth name to cali9316f713663 ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.380 [INFO][4197] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.380 [INFO][4197] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" 
WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--rrzs2-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"5d3b013a-7876-459c-882d-e9f8c04bb711", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f", Pod:"coredns-787d4945fb-rrzs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9316f713663", MAC:"b6:1f:d1:97:cd:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:11.391156 env[1210]: 2024-02-09 19:46:11.386 [INFO][4197] k8s.go 491: Wrote updated endpoint to datastore 
ContainerID="4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f" Namespace="kube-system" Pod="coredns-787d4945fb-rrzs2" WorkloadEndpoint="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:11.401000 audit[4307]: NETFILTER_CFG table=filter:121 family=2 entries=34 op=nft_register_chain pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:11.401000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=17900 a0=3 a1=7fff998d7490 a2=0 a3=7fff998d747c items=0 ppid=3797 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:11.401000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:11.407725 env[1210]: time="2024-02-09T19:46:11.407673887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mlq8f,Uid:109b6767-1ceb-4335-86d1-e08458d36dc3,Namespace:kube-system,Attempt:1,} returns sandbox id \"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236\"" Feb 9 19:46:11.408694 kubelet[2166]: E0209 19:46:11.408671 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:11.410651 env[1210]: time="2024-02-09T19:46:11.410619185Z" level=info msg="CreateContainer within sandbox \"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:46:11.415462 env[1210]: time="2024-02-09T19:46:11.415373186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:11.415745 env[1210]: time="2024-02-09T19:46:11.415443238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:11.415821 env[1210]: time="2024-02-09T19:46:11.415629787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:11.415992 env[1210]: time="2024-02-09T19:46:11.415967001Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f pid=4320 runtime=io.containerd.runc.v2 Feb 9 19:46:11.423058 env[1210]: time="2024-02-09T19:46:11.423013915Z" level=info msg="CreateContainer within sandbox \"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5180fceeabd874c9c5b409aeb49d9ca838c26877c4cf40846ef49c7160f775d4\"" Feb 9 19:46:11.425412 env[1210]: time="2024-02-09T19:46:11.423750316Z" level=info msg="StartContainer for \"5180fceeabd874c9c5b409aeb49d9ca838c26877c4cf40846ef49c7160f775d4\"" Feb 9 19:46:11.439130 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:11.467061 env[1210]: time="2024-02-09T19:46:11.467014547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rrzs2,Uid:5d3b013a-7876-459c-882d-e9f8c04bb711,Namespace:kube-system,Attempt:1,} returns sandbox id \"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f\"" Feb 9 19:46:11.467574 kubelet[2166]: E0209 19:46:11.467549 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:11.468955 env[1210]: 
time="2024-02-09T19:46:11.468930934Z" level=info msg="CreateContainer within sandbox \"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:46:11.476221 env[1210]: time="2024-02-09T19:46:11.476172122Z" level=info msg="StartContainer for \"5180fceeabd874c9c5b409aeb49d9ca838c26877c4cf40846ef49c7160f775d4\" returns successfully" Feb 9 19:46:11.491203 env[1210]: time="2024-02-09T19:46:11.490973919Z" level=info msg="CreateContainer within sandbox \"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ed772764af0c2bf64162d8c4c11959e7eee7a04b064fcba6a7d5a1a58699fad2\"" Feb 9 19:46:11.491578 env[1210]: time="2024-02-09T19:46:11.491551263Z" level=info msg="StartContainer for \"ed772764af0c2bf64162d8c4c11959e7eee7a04b064fcba6a7d5a1a58699fad2\"" Feb 9 19:46:11.536487 env[1210]: time="2024-02-09T19:46:11.536445472Z" level=info msg="StartContainer for \"ed772764af0c2bf64162d8c4c11959e7eee7a04b064fcba6a7d5a1a58699fad2\" returns successfully" Feb 9 19:46:11.655787 systemd-networkd[1085]: cali599f826e24c: Gained IPv6LL Feb 9 19:46:12.168655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2735746654.mount: Deactivated successfully. 
Feb 9 19:46:12.186134 kubelet[2166]: E0209 19:46:12.184878 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:12.188470 kubelet[2166]: E0209 19:46:12.187469 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:12.201104 kubelet[2166]: I0209 19:46:12.201069 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-mlq8f" podStartSLOduration=38.201031688 pod.CreationTimestamp="2024-02-09 19:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:12.200583217 +0000 UTC m=+51.293741543" watchObservedRunningTime="2024-02-09 19:46:12.201031688 +0000 UTC m=+51.294190005" Feb 9 19:46:12.223298 kubelet[2166]: I0209 19:46:12.223262 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-rrzs2" podStartSLOduration=38.223225876 pod.CreationTimestamp="2024-02-09 19:45:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:12.222928668 +0000 UTC m=+51.316086964" watchObservedRunningTime="2024-02-09 19:46:12.223225876 +0000 UTC m=+51.316384173" Feb 9 19:46:12.259000 audit[4456]: NETFILTER_CFG table=filter:122 family=2 entries=12 op=nft_register_rule pid=4456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.259000 audit[4456]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7ffcf1e38790 a2=0 a3=7ffcf1e3877c items=0 ppid=2345 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.259000 audit[4456]: NETFILTER_CFG table=nat:123 family=2 entries=30 op=nft_register_rule pid=4456 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.259000 audit[4456]: SYSCALL arch=c000003e syscall=46 success=yes exit=8836 a0=3 a1=7ffcf1e38790 a2=0 a3=7ffcf1e3877c items=0 ppid=2345 pid=4456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.300000 audit[4482]: NETFILTER_CFG table=filter:124 family=2 entries=9 op=nft_register_rule pid=4482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.300000 audit[4482]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffe12a4a600 a2=0 a3=7ffe12a4a5ec items=0 ppid=2345 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.304000 audit[4482]: NETFILTER_CFG table=nat:125 family=2 entries=63 op=nft_register_chain pid=4482 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:12.304000 audit[4482]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe12a4a600 a2=0 a3=7ffe12a4a5ec items=0 ppid=2345 pid=4482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.304000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:12.612552 systemd-networkd[1085]: cali9316f713663: Gained IPv6LL Feb 9 19:46:12.726464 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:41768.service. Feb 9 19:46:12.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.60:22-10.0.0.1:41768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:12.730348 kernel: kauditd_printk_skb: 140 callbacks suppressed Feb 9 19:46:12.730413 kernel: audit: type=1130 audit(1707507972.725:345): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.60:22-10.0.0.1:41768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:12.768000 audit[4491]: USER_ACCT pid=4491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.770041 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 41768 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:12.771898 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:12.770000 audit[4491]: CRED_ACQ pid=4491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.775055 kernel: audit: type=1101 audit(1707507972.768:346): pid=4491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.775096 kernel: audit: type=1103 audit(1707507972.770:347): pid=4491 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.775525 systemd-logind[1192]: New session 11 of user core. Feb 9 19:46:12.776329 systemd[1]: Started session-11.scope. 
Feb 9 19:46:12.776811 kernel: audit: type=1006 audit(1707507972.770:348): pid=4491 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 19:46:12.776871 kernel: audit: type=1300 audit(1707507972.770:348): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc33e7afc0 a2=3 a3=0 items=0 ppid=1 pid=4491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.770000 audit[4491]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc33e7afc0 a2=3 a3=0 items=0 ppid=1 pid=4491 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:12.780613 kernel: audit: type=1327 audit(1707507972.770:348): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:12.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:12.779000 audit[4491]: USER_START pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.784296 kernel: audit: type=1105 audit(1707507972.779:349): pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.784417 kernel: audit: type=1103 audit(1707507972.781:350): pid=4494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Feb 9 19:46:12.781000 audit[4494]: CRED_ACQ pid=4494 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.899128 sshd[4491]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:12.898000 audit[4491]: USER_END pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.901722 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:41768.service: Deactivated successfully. Feb 9 19:46:12.902813 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:46:12.903863 systemd-logind[1192]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:46:12.898000 audit[4491]: CRED_DISP pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.905593 systemd-logind[1192]: Removed session 11. 
Feb 9 19:46:12.912256 kernel: audit: type=1106 audit(1707507972.898:351): pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.912425 kernel: audit: type=1104 audit(1707507972.898:352): pid=4491 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:12.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.60:22-10.0.0.1:41768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:13.081800 env[1210]: time="2024-02-09T19:46:13.081746138Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:13.084249 env[1210]: time="2024-02-09T19:46:13.084214429Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:13.086743 env[1210]: time="2024-02-09T19:46:13.086706526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:13.088506 env[1210]: time="2024-02-09T19:46:13.088472139Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:13.089113 
env[1210]: time="2024-02-09T19:46:13.089087674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:91c1c91da7602f16686c149419195b486669f3a1828fd320cf332fdc6a25297d\"" Feb 9 19:46:13.090820 env[1210]: time="2024-02-09T19:46:13.090787514Z" level=info msg="CreateContainer within sandbox \"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 19:46:13.105056 env[1210]: time="2024-02-09T19:46:13.105004330Z" level=info msg="CreateContainer within sandbox \"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d1acd4954deb3cec4737ad14464f732be28081602f24b36997eea1d0954b7c69\"" Feb 9 19:46:13.105482 env[1210]: time="2024-02-09T19:46:13.105448905Z" level=info msg="StartContainer for \"d1acd4954deb3cec4737ad14464f732be28081602f24b36997eea1d0954b7c69\"" Feb 9 19:46:13.152244 env[1210]: time="2024-02-09T19:46:13.152149725Z" level=info msg="StartContainer for \"d1acd4954deb3cec4737ad14464f732be28081602f24b36997eea1d0954b7c69\" returns successfully" Feb 9 19:46:13.153505 env[1210]: time="2024-02-09T19:46:13.153484960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 19:46:13.189588 systemd-networkd[1085]: cali32b18dbcd9e: Gained IPv6LL Feb 9 19:46:13.190216 kubelet[2166]: E0209 19:46:13.190193 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:13.190792 kubelet[2166]: E0209 19:46:13.190777 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:14.028687 env[1210]: time="2024-02-09T19:46:14.028643788Z" level=info msg="StopPodSandbox for 
\"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.064 [INFO][4555] k8s.go 578: Cleaning up netns ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.065 [INFO][4555] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" iface="eth0" netns="/var/run/netns/cni-cb354422-09e2-4103-1e4b-d0bda96b380b" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.065 [INFO][4555] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" iface="eth0" netns="/var/run/netns/cni-cb354422-09e2-4103-1e4b-d0bda96b380b" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.065 [INFO][4555] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" iface="eth0" netns="/var/run/netns/cni-cb354422-09e2-4103-1e4b-d0bda96b380b" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.065 [INFO][4555] k8s.go 585: Releasing IP address(es) ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.065 [INFO][4555] utils.go 188: Calico CNI releasing IP address ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.083 [INFO][4563] ipam_plugin.go 415: Releasing address using handleID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.083 [INFO][4563] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.083 [INFO][4563] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.089 [WARNING][4563] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.089 [INFO][4563] ipam_plugin.go 443: Releasing address using workloadID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.090 [INFO][4563] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:14.092963 env[1210]: 2024-02-09 19:46:14.091 [INFO][4555] k8s.go 591: Teardown processing complete. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:14.093625 env[1210]: time="2024-02-09T19:46:14.093197692Z" level=info msg="TearDown network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" successfully" Feb 9 19:46:14.093625 env[1210]: time="2024-02-09T19:46:14.093231245Z" level=info msg="StopPodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" returns successfully" Feb 9 19:46:14.093803 env[1210]: time="2024-02-09T19:46:14.093777911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54744cf5f8-xqcbp,Uid:eab19472-8749-4447-8ba1-c4bcd69e7ff4,Namespace:calico-system,Attempt:1,}" Feb 9 19:46:14.095087 systemd[1]: run-netns-cni\x2dcb354422\x2d09e2\x2d4103\x2d1e4b\x2dd0bda96b380b.mount: Deactivated successfully. 
Feb 9 19:46:14.187221 systemd-networkd[1085]: cali056db815c92: Link UP Feb 9 19:46:14.188541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:46:14.188579 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali056db815c92: link becomes ready Feb 9 19:46:14.188345 systemd-networkd[1085]: cali056db815c92: Gained carrier Feb 9 19:46:14.192497 kubelet[2166]: E0209 19:46:14.191865 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:14.192497 kubelet[2166]: E0209 19:46:14.192453 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.133 [INFO][4570] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0 calico-kube-controllers-54744cf5f8- calico-system eab19472-8749-4447-8ba1-c4bcd69e7ff4 927 0 2024-02-09 19:45:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54744cf5f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54744cf5f8-xqcbp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali056db815c92 [] []}} ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.133 [INFO][4570] k8s.go 76: Extracted identifiers for CmdAddK8s 
ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.155 [INFO][4583] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" HandleID="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.164 [INFO][4583] ipam_plugin.go 268: Auto assigning IP ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" HandleID="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025d990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54744cf5f8-xqcbp", "timestamp":"2024-02-09 19:46:14.155043145 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.164 [INFO][4583] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.164 [INFO][4583] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.164 [INFO][4583] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.165 [INFO][4583] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.168 [INFO][4583] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.171 [INFO][4583] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.172 [INFO][4583] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.174 [INFO][4583] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.174 [INFO][4583] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.175 [INFO][4583] ipam.go 1682: Creating new handle: k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.178 [INFO][4583] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.182 [INFO][4583] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.183 [INFO][4583] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" host="localhost" Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.183 [INFO][4583] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:14.202993 env[1210]: 2024-02-09 19:46:14.183 [INFO][4583] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" HandleID="k8s-pod-network.a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.185 [INFO][4570] k8s.go 385: Populated endpoint ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0", GenerateName:"calico-kube-controllers-54744cf5f8-", Namespace:"calico-system", SelfLink:"", UID:"eab19472-8749-4447-8ba1-c4bcd69e7ff4", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54744cf5f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54744cf5f8-xqcbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali056db815c92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.185 [INFO][4570] k8s.go 386: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.185 [INFO][4570] dataplane_linux.go 68: Setting the host side veth name to cali056db815c92 ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.190 [INFO][4570] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.192 [INFO][4570] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0", GenerateName:"calico-kube-controllers-54744cf5f8-", Namespace:"calico-system", SelfLink:"", UID:"eab19472-8749-4447-8ba1-c4bcd69e7ff4", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54744cf5f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d", Pod:"calico-kube-controllers-54744cf5f8-xqcbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali056db815c92", MAC:"c6:ba:e4:cd:48:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:14.203584 env[1210]: 2024-02-09 19:46:14.199 [INFO][4570] k8s.go 491: Wrote updated endpoint to datastore ContainerID="a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d" Namespace="calico-system" Pod="calico-kube-controllers-54744cf5f8-xqcbp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:14.216000 audit[4608]: NETFILTER_CFG table=filter:126 family=2 
entries=42 op=nft_register_chain pid=4608 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:46:14.216000 audit[4608]: SYSCALL arch=c000003e syscall=46 success=yes exit=20696 a0=3 a1=7ffd8e24a980 a2=0 a3=7ffd8e24a96c items=0 ppid=3797 pid=4608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:14.216000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:46:14.220451 env[1210]: time="2024-02-09T19:46:14.219267742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:46:14.220451 env[1210]: time="2024-02-09T19:46:14.219325180Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:46:14.220451 env[1210]: time="2024-02-09T19:46:14.219346049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:46:14.220451 env[1210]: time="2024-02-09T19:46:14.219560771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d pid=4611 runtime=io.containerd.runc.v2 Feb 9 19:46:14.239147 systemd[1]: run-containerd-runc-k8s.io-a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d-runc.cdT5NM.mount: Deactivated successfully. 
Feb 9 19:46:14.251960 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:46:14.274687 env[1210]: time="2024-02-09T19:46:14.274648064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54744cf5f8-xqcbp,Uid:eab19472-8749-4447-8ba1-c4bcd69e7ff4,Namespace:calico-system,Attempt:1,} returns sandbox id \"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d\"" Feb 9 19:46:15.390330 env[1210]: time="2024-02-09T19:46:15.390274861Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:15.392075 env[1210]: time="2024-02-09T19:46:15.392014516Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:15.393267 env[1210]: time="2024-02-09T19:46:15.393224135Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:15.394768 env[1210]: time="2024-02-09T19:46:15.394732185Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:15.395245 env[1210]: time="2024-02-09T19:46:15.395208789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:d36ef67f7b24c4facd86d0bc06b0cd907431a822dee695eb06b86a905bff85d4\"" Feb 9 19:46:15.395802 env[1210]: time="2024-02-09T19:46:15.395782174Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 19:46:15.396905 env[1210]: time="2024-02-09T19:46:15.396877630Z" level=info msg="CreateContainer within sandbox \"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 19:46:15.408422 env[1210]: time="2024-02-09T19:46:15.408354944Z" level=info msg="CreateContainer within sandbox \"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f47547f1baf8b781f4f47813bdc530a13bd0d5d89ca41acc29a5207ea9cdbfdb\"" Feb 9 19:46:15.408872 env[1210]: time="2024-02-09T19:46:15.408848611Z" level=info msg="StartContainer for \"f47547f1baf8b781f4f47813bdc530a13bd0d5d89ca41acc29a5207ea9cdbfdb\"" Feb 9 19:46:15.429884 systemd[1]: run-containerd-runc-k8s.io-f47547f1baf8b781f4f47813bdc530a13bd0d5d89ca41acc29a5207ea9cdbfdb-runc.cgcVzb.mount: Deactivated successfully. Feb 9 19:46:15.457986 env[1210]: time="2024-02-09T19:46:15.456968786Z" level=info msg="StartContainer for \"f47547f1baf8b781f4f47813bdc530a13bd0d5d89ca41acc29a5207ea9cdbfdb\" returns successfully" Feb 9 19:46:16.090724 kubelet[2166]: I0209 19:46:16.090609 2166 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 19:46:16.091081 kubelet[2166]: I0209 19:46:16.090733 2166 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 19:46:16.199707 systemd-networkd[1085]: cali056db815c92: Gained IPv6LL Feb 9 19:46:17.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.60:22-10.0.0.1:41778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:17.902413 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:41778.service. Feb 9 19:46:17.920329 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 9 19:46:17.920401 kernel: audit: type=1130 audit(1707507977.901:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.60:22-10.0.0.1:41778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:17.956000 audit[4689]: USER_ACCT pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.958058 sshd[4689]: Accepted publickey for core from 10.0.0.1 port 41778 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:17.958000 audit[4689]: CRED_ACQ pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.974599 kernel: audit: type=1101 audit(1707507977.956:356): pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.974645 kernel: audit: type=1103 audit(1707507977.958:357): pid=4689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.974661 kernel: audit: type=1006 audit(1707507977.958:358): pid=4689 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 
Feb 9 19:46:17.958000 audit[4689]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7d89c0d0 a2=3 a3=0 items=0 ppid=1 pid=4689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:17.978695 kernel: audit: type=1300 audit(1707507977.958:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff7d89c0d0 a2=3 a3=0 items=0 ppid=1 pid=4689 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:17.978748 kernel: audit: type=1327 audit(1707507977.958:358): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:17.958000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:17.979524 sshd[4689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:17.983159 systemd-logind[1192]: New session 12 of user core. Feb 9 19:46:17.984074 systemd[1]: Started session-12.scope. 
Feb 9 19:46:17.987000 audit[4689]: USER_START pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.987000 audit[4692]: CRED_ACQ pid=4692 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.994069 kernel: audit: type=1105 audit(1707507977.987:359): pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:17.994105 kernel: audit: type=1103 audit(1707507977.987:360): pid=4692 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:18.139713 sshd[4689]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:18.139000 audit[4689]: USER_END pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:18.141906 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:41778.service: Deactivated successfully. Feb 9 19:46:18.142848 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 9 19:46:18.139000 audit[4689]: CRED_DISP pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:18.146074 kernel: audit: type=1106 audit(1707507978.139:361): pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:18.146144 kernel: audit: type=1104 audit(1707507978.139:362): pid=4689 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:18.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.60:22-10.0.0.1:41778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:18.146616 systemd-logind[1192]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:46:18.147448 systemd-logind[1192]: Removed session 12. Feb 9 19:46:20.997862 env[1210]: time="2024-02-09T19:46:20.997807326Z" level=info msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.206 [WARNING][4723] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--mlq8f-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"109b6767-1ceb-4335-86d1-e08458d36dc3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236", Pod:"coredns-787d4945fb-mlq8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b18dbcd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.206 [INFO][4723] k8s.go 578: Cleaning up netns 
ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.206 [INFO][4723] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" iface="eth0" netns="" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.206 [INFO][4723] k8s.go 585: Releasing IP address(es) ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.206 [INFO][4723] utils.go 188: Calico CNI releasing IP address ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.224 [INFO][4732] ipam_plugin.go 415: Releasing address using handleID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.224 [INFO][4732] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.224 [INFO][4732] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.230 [WARNING][4732] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.230 [INFO][4732] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.231 [INFO][4732] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:21.234948 env[1210]: 2024-02-09 19:46:21.233 [INFO][4723] k8s.go 591: Teardown processing complete. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.235422 env[1210]: time="2024-02-09T19:46:21.234994714Z" level=info msg="TearDown network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" successfully" Feb 9 19:46:21.235422 env[1210]: time="2024-02-09T19:46:21.235048765Z" level=info msg="StopPodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" returns successfully" Feb 9 19:46:21.235739 env[1210]: time="2024-02-09T19:46:21.235708472Z" level=info msg="RemovePodSandbox for \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" Feb 9 19:46:21.235849 env[1210]: time="2024-02-09T19:46:21.235808000Z" level=info msg="Forcibly stopping sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\"" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.266 [WARNING][4754] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--mlq8f-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"109b6767-1ceb-4335-86d1-e08458d36dc3", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d2b8b7d427a324df0eaca55e2209cb3d5973becbce9890e56aced2177284236", Pod:"coredns-787d4945fb-mlq8f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali32b18dbcd9e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.266 [INFO][4754] k8s.go 578: Cleaning up netns 
ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.266 [INFO][4754] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" iface="eth0" netns="" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.266 [INFO][4754] k8s.go 585: Releasing IP address(es) ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.266 [INFO][4754] utils.go 188: Calico CNI releasing IP address ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.297 [INFO][4761] ipam_plugin.go 415: Releasing address using handleID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.298 [INFO][4761] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.298 [INFO][4761] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.304 [WARNING][4761] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.304 [INFO][4761] ipam_plugin.go 443: Releasing address using workloadID ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" HandleID="k8s-pod-network.2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Workload="localhost-k8s-coredns--787d4945fb--mlq8f-eth0" Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.305 [INFO][4761] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:21.309383 env[1210]: 2024-02-09 19:46:21.307 [INFO][4754] k8s.go 591: Teardown processing complete. ContainerID="2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984" Feb 9 19:46:21.309881 env[1210]: time="2024-02-09T19:46:21.309845623Z" level=info msg="TearDown network for sandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" successfully" Feb 9 19:46:21.312928 env[1210]: time="2024-02-09T19:46:21.312893971Z" level=info msg="RemovePodSandbox \"2b736ddcff5455edb14bae3aa930c1067459567cc6e5d71cb64b58db76bea984\" returns successfully" Feb 9 19:46:21.313493 env[1210]: time="2024-02-09T19:46:21.313458370Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\"" Feb 9 19:46:21.313585 env[1210]: time="2024-02-09T19:46:21.313547297Z" level=info msg="TearDown network for sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" successfully" Feb 9 19:46:21.313585 env[1210]: time="2024-02-09T19:46:21.313581832Z" level=info msg="StopPodSandbox for \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" returns successfully" Feb 9 19:46:21.313947 env[1210]: time="2024-02-09T19:46:21.313925746Z" level=info msg="RemovePodSandbox for 
\"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\"" Feb 9 19:46:21.314005 env[1210]: time="2024-02-09T19:46:21.313952166Z" level=info msg="Forcibly stopping sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\"" Feb 9 19:46:21.314052 env[1210]: time="2024-02-09T19:46:21.314036294Z" level=info msg="TearDown network for sandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" successfully" Feb 9 19:46:21.316971 env[1210]: time="2024-02-09T19:46:21.316942516Z" level=info msg="RemovePodSandbox \"005d18212d922d876c0bd4c9b159a0c4b94aee061b0917561b72731c8c71d8e3\" returns successfully" Feb 9 19:46:21.317196 env[1210]: time="2024-02-09T19:46:21.317173278Z" level=info msg="StopPodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.347 [WARNING][4785] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0", GenerateName:"calico-kube-controllers-54744cf5f8-", Namespace:"calico-system", SelfLink:"", UID:"eab19472-8749-4447-8ba1-c4bcd69e7ff4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54744cf5f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d", Pod:"calico-kube-controllers-54744cf5f8-xqcbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali056db815c92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.348 [INFO][4785] k8s.go 578: Cleaning up netns ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.348 [INFO][4785] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" iface="eth0" netns="" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.348 [INFO][4785] k8s.go 585: Releasing IP address(es) ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.348 [INFO][4785] utils.go 188: Calico CNI releasing IP address ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.365 [INFO][4792] ipam_plugin.go 415: Releasing address using handleID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.365 [INFO][4792] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.365 [INFO][4792] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.375 [WARNING][4792] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.375 [INFO][4792] ipam_plugin.go 443: Releasing address using workloadID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.376 [INFO][4792] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:21.379309 env[1210]: 2024-02-09 19:46:21.377 [INFO][4785] k8s.go 591: Teardown processing complete. 
ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.379789 env[1210]: time="2024-02-09T19:46:21.379338020Z" level=info msg="TearDown network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" successfully" Feb 9 19:46:21.379789 env[1210]: time="2024-02-09T19:46:21.379378556Z" level=info msg="StopPodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" returns successfully" Feb 9 19:46:21.379954 env[1210]: time="2024-02-09T19:46:21.379902479Z" level=info msg="RemovePodSandbox for \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" Feb 9 19:46:21.380003 env[1210]: time="2024-02-09T19:46:21.379942103Z" level=info msg="Forcibly stopping sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\"" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.409 [WARNING][4815] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0", GenerateName:"calico-kube-controllers-54744cf5f8-", Namespace:"calico-system", SelfLink:"", UID:"eab19472-8749-4447-8ba1-c4bcd69e7ff4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54744cf5f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d", Pod:"calico-kube-controllers-54744cf5f8-xqcbp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali056db815c92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.409 [INFO][4815] k8s.go 578: Cleaning up netns ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.409 [INFO][4815] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" iface="eth0" netns="" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.409 [INFO][4815] k8s.go 585: Releasing IP address(es) ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.409 [INFO][4815] utils.go 188: Calico CNI releasing IP address ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.426 [INFO][4822] ipam_plugin.go 415: Releasing address using handleID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.426 [INFO][4822] ipam_plugin.go 356: About to acquire host-wide IPAM lock. 
Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.426 [INFO][4822] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.433 [WARNING][4822] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.433 [INFO][4822] ipam_plugin.go 443: Releasing address using workloadID ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" HandleID="k8s-pod-network.cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Workload="localhost-k8s-calico--kube--controllers--54744cf5f8--xqcbp-eth0" Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.434 [INFO][4822] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:21.438063 env[1210]: 2024-02-09 19:46:21.436 [INFO][4815] k8s.go 591: Teardown processing complete. 
ContainerID="cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf" Feb 9 19:46:21.438643 env[1210]: time="2024-02-09T19:46:21.438089499Z" level=info msg="TearDown network for sandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" successfully" Feb 9 19:46:21.462248 env[1210]: time="2024-02-09T19:46:21.462191460Z" level=info msg="RemovePodSandbox \"cc02c7481f89d51f9163348edcbb5a2c51cfc30861221d2ddf9af8991d4e1bdf\" returns successfully" Feb 9 19:46:21.462687 env[1210]: time="2024-02-09T19:46:21.462658737Z" level=info msg="StopPodSandbox for \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\"" Feb 9 19:46:21.462789 env[1210]: time="2024-02-09T19:46:21.462745630Z" level=info msg="TearDown network for sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" successfully" Feb 9 19:46:21.462789 env[1210]: time="2024-02-09T19:46:21.462781087Z" level=info msg="StopPodSandbox for \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" returns successfully" Feb 9 19:46:21.463056 env[1210]: time="2024-02-09T19:46:21.463027900Z" level=info msg="RemovePodSandbox for \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\"" Feb 9 19:46:21.463056 env[1210]: time="2024-02-09T19:46:21.463049390Z" level=info msg="Forcibly stopping sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\"" Feb 9 19:46:21.463198 env[1210]: time="2024-02-09T19:46:21.463098151Z" level=info msg="TearDown network for sandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" successfully" Feb 9 19:46:21.474154 env[1210]: time="2024-02-09T19:46:21.474126028Z" level=info msg="RemovePodSandbox \"3ef36c54595bf74455426d518baee3419364e15c8d2a0703179cceb4ca5014b6\" returns successfully" Feb 9 19:46:21.474472 env[1210]: time="2024-02-09T19:46:21.474446269Z" level=info msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" Feb 9 19:46:21.537832 
env[1210]: 2024-02-09 19:46:21.505 [WARNING][4844] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ngqgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"991e9420-1ee3-42d4-b3be-2ddc6b5f52db", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136", Pod:"csi-node-driver-ngqgb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali599f826e24c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.505 [INFO][4844] k8s.go 578: Cleaning up netns ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.505 [INFO][4844] dataplane_linux.go 526: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" iface="eth0" netns="" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.505 [INFO][4844] k8s.go 585: Releasing IP address(es) ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.505 [INFO][4844] utils.go 188: Calico CNI releasing IP address ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.523 [INFO][4852] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.523 [INFO][4852] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.523 [INFO][4852] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.530 [WARNING][4852] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.530 [INFO][4852] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.531 [INFO][4852] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:21.537832 env[1210]: 2024-02-09 19:46:21.536 [INFO][4844] k8s.go 591: Teardown processing complete. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.538371 env[1210]: time="2024-02-09T19:46:21.537851337Z" level=info msg="TearDown network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" successfully" Feb 9 19:46:21.538371 env[1210]: time="2024-02-09T19:46:21.537881043Z" level=info msg="StopPodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" returns successfully" Feb 9 19:46:21.538371 env[1210]: time="2024-02-09T19:46:21.538303165Z" level=info msg="RemovePodSandbox for \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" Feb 9 19:46:21.538371 env[1210]: time="2024-02-09T19:46:21.538340414Z" level=info msg="Forcibly stopping sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\"" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.572 [WARNING][4876] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ngqgb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"991e9420-1ee3-42d4-b3be-2ddc6b5f52db", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"046712ab999f2b738b92fc3e00bf53d0cd3bf3c5212ccd0b21f641ffa3103136", Pod:"csi-node-driver-ngqgb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali599f826e24c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.573 [INFO][4876] k8s.go 578: Cleaning up netns ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.573 [INFO][4876] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" iface="eth0" netns="" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.573 [INFO][4876] k8s.go 585: Releasing IP address(es) ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.573 [INFO][4876] utils.go 188: Calico CNI releasing IP address ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.590 [INFO][4883] ipam_plugin.go 415: Releasing address using handleID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.590 [INFO][4883] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.590 [INFO][4883] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.596 [WARNING][4883] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.597 [INFO][4883] ipam_plugin.go 443: Releasing address using workloadID ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" HandleID="k8s-pod-network.8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Workload="localhost-k8s-csi--node--driver--ngqgb-eth0" Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.598 [INFO][4883] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:21.600696 env[1210]: 2024-02-09 19:46:21.599 [INFO][4876] k8s.go 591: Teardown processing complete. ContainerID="8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5" Feb 9 19:46:21.600696 env[1210]: time="2024-02-09T19:46:21.600659405Z" level=info msg="TearDown network for sandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" successfully" Feb 9 19:46:21.743636 env[1210]: time="2024-02-09T19:46:21.743582097Z" level=info msg="RemovePodSandbox \"8f437e1d65c89f1ac628ac60f083c2c12bd98564c062d3a4cb629e3ece959bf5\" returns successfully" Feb 9 19:46:21.744132 env[1210]: time="2024-02-09T19:46:21.744087164Z" level=info msg="StopPodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.775 [WARNING][4905] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--rrzs2-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"5d3b013a-7876-459c-882d-e9f8c04bb711", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f", Pod:"coredns-787d4945fb-rrzs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9316f713663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.775 [INFO][4905] k8s.go 578: Cleaning up netns ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.775 [INFO][4905] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" iface="eth0" netns="" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.775 [INFO][4905] k8s.go 585: Releasing IP address(es) ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.775 [INFO][4905] utils.go 188: Calico CNI releasing IP address ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.795 [INFO][4913] ipam_plugin.go 415: Releasing address using handleID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.795 [INFO][4913] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.795 [INFO][4913] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.801 [WARNING][4913] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.801 [INFO][4913] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.803 [INFO][4913] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 19:46:21.806008 env[1210]: 2024-02-09 19:46:21.804 [INFO][4905] k8s.go 591: Teardown processing complete. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.806495 env[1210]: time="2024-02-09T19:46:21.805998992Z" level=info msg="TearDown network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" successfully" Feb 9 19:46:21.806495 env[1210]: time="2024-02-09T19:46:21.806034889Z" level=info msg="StopPodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" returns successfully" Feb 9 19:46:21.806591 env[1210]: time="2024-02-09T19:46:21.806550807Z" level=info msg="RemovePodSandbox for \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" Feb 9 19:46:21.806624 env[1210]: time="2024-02-09T19:46:21.806594069Z" level=info msg="Forcibly stopping sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\"" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.865 [WARNING][4935] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--787d4945fb--rrzs2-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"5d3b013a-7876-459c-882d-e9f8c04bb711", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4499fd25efbf27f55142294905603dd164f29b0d645aebbd496a0389e974988f", Pod:"coredns-787d4945fb-rrzs2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9316f713663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.865 [INFO][4935] k8s.go 578: Cleaning up netns 
ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.865 [INFO][4935] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" iface="eth0" netns="" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.865 [INFO][4935] k8s.go 585: Releasing IP address(es) ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.865 [INFO][4935] utils.go 188: Calico CNI releasing IP address ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.880 [INFO][4942] ipam_plugin.go 415: Releasing address using handleID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.881 [INFO][4942] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.881 [INFO][4942] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.886 [WARNING][4942] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.886 [INFO][4942] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" HandleID="k8s-pod-network.f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Workload="localhost-k8s-coredns--787d4945fb--rrzs2-eth0" Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.888 [INFO][4942] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:46:21.890989 env[1210]: 2024-02-09 19:46:21.889 [INFO][4935] k8s.go 591: Teardown processing complete. ContainerID="f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b" Feb 9 19:46:21.890989 env[1210]: time="2024-02-09T19:46:21.890938726Z" level=info msg="TearDown network for sandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" successfully" Feb 9 19:46:21.928942 env[1210]: time="2024-02-09T19:46:21.928906208Z" level=info msg="RemovePodSandbox \"f6b44a0f265ce36c68708cc3566e4d22b13942e311535816757845b0142f930b\" returns successfully" Feb 9 19:46:21.969211 env[1210]: time="2024-02-09T19:46:21.969159667Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:21.970887 env[1210]: time="2024-02-09T19:46:21.970866459Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:21.973016 env[1210]: time="2024-02-09T19:46:21.972979472Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:21.974946 env[1210]: time="2024-02-09T19:46:21.974909974Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:46:21.975717 env[1210]: time="2024-02-09T19:46:21.975684357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:4e87edec0297dadd6f3bb25b2f540fd40e2abed9fff582c97ff4cd751d3f9803\"" Feb 9 19:46:21.981237 env[1210]: time="2024-02-09T19:46:21.981203609Z" level=info msg="CreateContainer within sandbox \"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 19:46:21.992592 env[1210]: time="2024-02-09T19:46:21.992555534Z" level=info msg="CreateContainer within sandbox \"a1bd5a7b380ef95d5504753da074c3fd4e259048dfe7b329c5000ed79e31219d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"613530c519232a0363bccba1fa6a5f8794d22c9c2732b5bda9af0e5db98fd1ab\"" Feb 9 19:46:21.993050 env[1210]: time="2024-02-09T19:46:21.993010517Z" level=info msg="StartContainer for \"613530c519232a0363bccba1fa6a5f8794d22c9c2732b5bda9af0e5db98fd1ab\"" Feb 9 19:46:22.044965 env[1210]: time="2024-02-09T19:46:22.044918787Z" level=info msg="StartContainer for \"613530c519232a0363bccba1fa6a5f8794d22c9c2732b5bda9af0e5db98fd1ab\" returns successfully" Feb 9 19:46:22.277085 kubelet[2166]: I0209 19:46:22.276690 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-ngqgb" podStartSLOduration=-9.223371994578121e+09 pod.CreationTimestamp="2024-02-09 19:45:40 +0000 UTC" firstStartedPulling="2024-02-09 
19:46:10.448784816 +0000 UTC m=+49.541943112" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:16.271425582 +0000 UTC m=+55.364583888" watchObservedRunningTime="2024-02-09 19:46:22.276654185 +0000 UTC m=+61.369812491" Feb 9 19:46:22.284540 kubelet[2166]: I0209 19:46:22.284508 2166 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54744cf5f8-xqcbp" podStartSLOduration=-9.223371994570307e+09 pod.CreationTimestamp="2024-02-09 19:45:40 +0000 UTC" firstStartedPulling="2024-02-09 19:46:14.275478833 +0000 UTC m=+53.368637129" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:46:22.27653955 +0000 UTC m=+61.369697857" watchObservedRunningTime="2024-02-09 19:46:22.284468493 +0000 UTC m=+61.377626799" Feb 9 19:46:23.142773 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:48310.service. Feb 9 19:46:23.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.60:22-10.0.0.1:48310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:23.143660 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:23.143721 kernel: audit: type=1130 audit(1707507983.141:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.60:22-10.0.0.1:48310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:23.180000 audit[5009]: USER_ACCT pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.181695 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:23.184221 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:23.184562 kernel: audit: type=1101 audit(1707507983.180:365): pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.184595 kernel: audit: type=1103 audit(1707507983.182:366): pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.182000 audit[5009]: CRED_ACQ pid=5009 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.187651 systemd-logind[1192]: New session 13 of user core. Feb 9 19:46:23.188338 systemd[1]: Started session-13.scope. 
Feb 9 19:46:23.188628 kernel: audit: type=1006 audit(1707507983.182:367): pid=5009 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 9 19:46:23.191409 kernel: audit: type=1300 audit(1707507983.182:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce5471740 a2=3 a3=0 items=0 ppid=1 pid=5009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:23.182000 audit[5009]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffce5471740 a2=3 a3=0 items=0 ppid=1 pid=5009 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:23.182000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:23.192430 kernel: audit: type=1327 audit(1707507983.182:367): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:23.191000 audit[5009]: USER_START pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.196416 kernel: audit: type=1105 audit(1707507983.191:368): pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.196463 kernel: audit: type=1103 audit(1707507983.192:369): pid=5012 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Feb 9 19:46:23.192000 audit[5012]: CRED_ACQ pid=5012 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.292813 sshd[5009]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:23.292000 audit[5009]: USER_END pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.295087 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:48316.service. Feb 9 19:46:23.295522 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:48310.service: Deactivated successfully. Feb 9 19:46:23.296219 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:46:23.292000 audit[5009]: CRED_DISP pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.299480 kernel: audit: type=1106 audit(1707507983.292:370): pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.299536 kernel: audit: type=1104 audit(1707507983.292:371): pid=5009 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.60:22-10.0.0.1:48316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:23.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.60:22-10.0.0.1:48310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:23.300419 systemd-logind[1192]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:46:23.301128 systemd-logind[1192]: Removed session 13. Feb 9 19:46:23.330000 audit[5022]: USER_ACCT pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.332025 sshd[5022]: Accepted publickey for core from 10.0.0.1 port 48316 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:23.331000 audit[5022]: CRED_ACQ pid=5022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.331000 audit[5022]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd574693a0 a2=3 a3=0 items=0 ppid=1 pid=5022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:23.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:23.332902 sshd[5022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:23.336080 systemd-logind[1192]: New session 14 of user core. Feb 9 19:46:23.336834 systemd[1]: Started session-14.scope. 
Feb 9 19:46:23.340000 audit[5022]: USER_START pid=5022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:23.341000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.220171 sshd[5022]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:24.222000 audit[5022]: USER_END pid=5022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.222000 audit[5022]: CRED_DISP pid=5022 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.230424 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:48328.service. Feb 9 19:46:24.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.60:22-10.0.0.1:48328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:24.235162 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:48316.service: Deactivated successfully. Feb 9 19:46:24.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.60:22-10.0.0.1:48316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:24.236534 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:46:24.237034 systemd-logind[1192]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:46:24.241227 systemd-logind[1192]: Removed session 14. Feb 9 19:46:24.275000 audit[5036]: USER_ACCT pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.278004 sshd[5036]: Accepted publickey for core from 10.0.0.1 port 48328 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:24.277000 audit[5036]: CRED_ACQ pid=5036 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.277000 audit[5036]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc35345b00 a2=3 a3=0 items=0 ppid=1 pid=5036 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:24.277000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:24.279130 sshd[5036]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:24.284043 systemd-logind[1192]: New session 15 of user core. Feb 9 19:46:24.285077 systemd[1]: Started session-15.scope. 
Feb 9 19:46:24.293000 audit[5036]: USER_START pid=5036 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.294000 audit[5041]: CRED_ACQ pid=5041 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.399213 sshd[5036]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:24.398000 audit[5036]: USER_END pid=5036 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.398000 audit[5036]: CRED_DISP pid=5036 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:24.401275 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:48328.service: Deactivated successfully. Feb 9 19:46:24.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.60:22-10.0.0.1:48328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:24.402116 systemd-logind[1192]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:46:24.402145 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:46:24.402782 systemd-logind[1192]: Removed session 15. Feb 9 19:46:29.402464 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:39310.service. 
Feb 9 19:46:29.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.60:22-10.0.0.1:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:29.406289 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 19:46:29.406351 kernel: audit: type=1130 audit(1707507989.402:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.60:22-10.0.0.1:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:29.440000 audit[5104]: USER_ACCT pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.441072 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 39310 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:29.442861 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:29.442000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.446662 systemd-logind[1192]: New session 16 of user core. 
Feb 9 19:46:29.447564 kernel: audit: type=1101 audit(1707507989.440:392): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.447747 kernel: audit: type=1103 audit(1707507989.442:393): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.447775 kernel: audit: type=1006 audit(1707507989.442:394): pid=5104 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 9 19:46:29.447626 systemd[1]: Started session-16.scope. Feb 9 19:46:29.449461 kernel: audit: type=1300 audit(1707507989.442:394): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff80eadf0 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:29.442000 audit[5104]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff80eadf0 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:29.442000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:29.453874 kernel: audit: type=1327 audit(1707507989.442:394): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:29.453913 kernel: audit: type=1105 audit(1707507989.452:395): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.452000 audit[5104]: USER_START pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.453000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.459301 kernel: audit: type=1103 audit(1707507989.453:396): pid=5107 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.547854 sshd[5104]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:29.548000 audit[5104]: USER_END pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.549945 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:39310.service: Deactivated successfully. Feb 9 19:46:29.550662 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:46:29.548000 audit[5104]: CRED_DISP pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.553206 systemd-logind[1192]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:46:29.553892 systemd-logind[1192]: Removed session 16. 
Feb 9 19:46:29.555963 kernel: audit: type=1106 audit(1707507989.548:397): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.556023 kernel: audit: type=1104 audit(1707507989.548:398): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:29.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.60:22-10.0.0.1:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:34.551852 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:39326.service. Feb 9 19:46:34.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.60:22-10.0.0.1:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:34.552955 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:34.553018 kernel: audit: type=1130 audit(1707507994.550:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.60:22-10.0.0.1:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:34.589000 audit[5140]: USER_ACCT pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.591284 sshd[5140]: Accepted publickey for core from 10.0.0.1 port 39326 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:34.593849 sshd[5140]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:34.592000 audit[5140]: CRED_ACQ pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.596581 kernel: audit: type=1101 audit(1707507994.589:401): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.596639 kernel: audit: type=1103 audit(1707507994.592:402): pid=5140 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.596669 kernel: audit: type=1006 audit(1707507994.592:403): pid=5140 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 19:46:34.598549 kernel: audit: type=1300 audit(1707507994.592:403): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec8b41770 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 
19:46:34.592000 audit[5140]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec8b41770 a2=3 a3=0 items=0 ppid=1 pid=5140 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:34.598190 systemd-logind[1192]: New session 17 of user core. Feb 9 19:46:34.599177 systemd[1]: Started session-17.scope. Feb 9 19:46:34.601981 kernel: audit: type=1327 audit(1707507994.592:403): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:34.592000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:34.603000 audit[5140]: USER_START pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.604000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.611539 kernel: audit: type=1105 audit(1707507994.603:404): pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.611603 kernel: audit: type=1103 audit(1707507994.604:405): pid=5143 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.708056 sshd[5140]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:34.707000 audit[5140]: 
USER_END pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.710727 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:39326.service: Deactivated successfully. Feb 9 19:46:34.711482 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:46:34.707000 audit[5140]: CRED_DISP pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.712570 systemd-logind[1192]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:46:34.713514 systemd-logind[1192]: Removed session 17. Feb 9 19:46:34.714687 kernel: audit: type=1106 audit(1707507994.707:406): pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.714765 kernel: audit: type=1104 audit(1707507994.707:407): pid=5140 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:34.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.60:22-10.0.0.1:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:39.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.60:22-10.0.0.1:42924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:39.712080 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:42924.service. Feb 9 19:46:39.712990 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:39.713040 kernel: audit: type=1130 audit(1707507999.710:409): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.60:22-10.0.0.1:42924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:39.747000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.748912 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 42924 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:39.750000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.751865 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:39.754002 kernel: audit: type=1101 audit(1707507999.747:410): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.754047 kernel: audit: type=1103 audit(1707507999.750:411): pid=5157 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.754064 kernel: audit: type=1006 audit(1707507999.750:412): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 9 19:46:39.755068 systemd-logind[1192]: New session 18 of user core. Feb 9 19:46:39.755662 kernel: audit: type=1300 audit(1707507999.750:412): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2facdcf0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:39.750000 audit[5157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2facdcf0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:39.755783 systemd[1]: Started session-18.scope. 
Feb 9 19:46:39.750000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:39.759096 kernel: audit: type=1327 audit(1707507999.750:412): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:39.758000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.759000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.764956 kernel: audit: type=1105 audit(1707507999.758:413): pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.765000 kernel: audit: type=1103 audit(1707507999.759:414): pid=5160 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.887912 sshd[5157]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:39.887000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.890299 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:42924.service: Deactivated successfully. 
Feb 9 19:46:39.891019 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:46:39.887000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.892900 systemd-logind[1192]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:46:39.893882 systemd-logind[1192]: Removed session 18. Feb 9 19:46:39.894098 kernel: audit: type=1106 audit(1707507999.887:415): pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.894154 kernel: audit: type=1104 audit(1707507999.887:416): pid=5157 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:39.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.60:22-10.0.0.1:42924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:42.029277 kubelet[2166]: E0209 19:46:42.029234 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:44.028715 kubelet[2166]: E0209 19:46:44.028657 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:44.891780 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:42936.service. 
Feb 9 19:46:44.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.60:22-10.0.0.1:42936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:44.892699 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:44.892825 kernel: audit: type=1130 audit(1707508004.890:418): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.60:22-10.0.0.1:42936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:45.096000 audit[5171]: USER_ACCT pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.100889 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:45.099000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.101789 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 42936 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:45.104738 kernel: audit: type=1101 audit(1707508005.096:419): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.104837 kernel: audit: type=1103 audit(1707508005.099:420): pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.104863 kernel: audit: type=1006 audit(1707508005.099:421): pid=5171 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 9 19:46:45.099000 audit[5171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc59fbb720 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:45.107585 systemd[1]: Started session-19.scope. Feb 9 19:46:45.107813 systemd-logind[1192]: New session 19 of user core. Feb 9 19:46:45.110633 kernel: audit: type=1300 audit(1707508005.099:421): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc59fbb720 a2=3 a3=0 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:45.110748 kernel: audit: type=1327 audit(1707508005.099:421): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:45.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:45.111000 audit[5171]: USER_START pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.112000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.118453 kernel: audit: type=1105 audit(1707508005.111:422): pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.118512 kernel: audit: type=1103 audit(1707508005.112:423): pid=5174 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.211759 sshd[5171]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:45.211000 audit[5171]: USER_END pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.214431 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:42936.service: Deactivated successfully. Feb 9 19:46:45.215173 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:46:45.216055 systemd-logind[1192]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:46:45.211000 audit[5171]: CRED_DISP pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.216854 systemd-logind[1192]: Removed session 19. 
Feb 9 19:46:45.219415 kernel: audit: type=1106 audit(1707508005.211:424): pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.219475 kernel: audit: type=1104 audit(1707508005.211:425): pid=5171 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:45.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.60:22-10.0.0.1:42936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.028656 kubelet[2166]: E0209 19:46:50.028621 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:50.215576 systemd[1]: Started sshd@19-10.0.0.60:22-10.0.0.1:39654.service. Feb 9 19:46:50.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.60:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.219111 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 19:46:50.219219 kernel: audit: type=1130 audit(1707508010.214:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.60:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:50.250000 audit[5192]: USER_ACCT pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.251999 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 39654 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:50.253851 sshd[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:50.252000 audit[5192]: CRED_ACQ pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.257106 systemd-logind[1192]: New session 20 of user core. Feb 9 19:46:50.257513 kernel: audit: type=1101 audit(1707508010.250:428): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.257541 kernel: audit: type=1103 audit(1707508010.252:429): pid=5192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.257792 systemd[1]: Started session-20.scope. 
Feb 9 19:46:50.259308 kernel: audit: type=1006 audit(1707508010.252:430): pid=5192 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Feb 9 19:46:50.259416 kernel: audit: type=1300 audit(1707508010.252:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfae79140 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:50.252000 audit[5192]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfae79140 a2=3 a3=0 items=0 ppid=1 pid=5192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:50.262750 kernel: audit: type=1327 audit(1707508010.252:430): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:50.252000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:50.260000 audit[5192]: USER_START pid=5192 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.266821 kernel: audit: type=1105 audit(1707508010.260:431): pid=5192 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.266885 kernel: audit: type=1103 audit(1707508010.261:432): pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Feb 9 19:46:50.261000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.360894 sshd[5192]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:50.363202 systemd[1]: Started sshd@20-10.0.0.60:22-10.0.0.1:39658.service. Feb 9 19:46:50.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.60:22-10.0.0.1:39658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.362000 audit[5192]: USER_END pid=5192 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.366135 systemd[1]: sshd@19-10.0.0.60:22-10.0.0.1:39654.service: Deactivated successfully. Feb 9 19:46:50.367624 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:46:50.368343 systemd-logind[1192]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:46:50.368663 kernel: audit: type=1130 audit(1707508010.361:433): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.60:22-10.0.0.1:39658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:50.368699 kernel: audit: type=1106 audit(1707508010.362:434): pid=5192 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.363000 audit[5192]: CRED_DISP pid=5192 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.60:22-10.0.0.1:39654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.369145 systemd-logind[1192]: Removed session 20. Feb 9 19:46:50.397000 audit[5204]: USER_ACCT pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.399531 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 39658 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:50.398000 audit[5204]: CRED_ACQ pid=5204 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.398000 audit[5204]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76b86d60 a2=3 a3=0 items=0 ppid=1 pid=5204 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:50.398000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:50.400371 sshd[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:50.403337 systemd-logind[1192]: New session 21 of user core. Feb 9 19:46:50.404057 systemd[1]: Started session-21.scope. Feb 9 19:46:50.406000 audit[5204]: USER_START pid=5204 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.407000 audit[5209]: CRED_ACQ pid=5209 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.770725 sshd[5204]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:50.770000 audit[5204]: USER_END pid=5204 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.771000 audit[5204]: CRED_DISP pid=5204 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.60:22-10.0.0.1:39662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.773037 systemd[1]: Started sshd@21-10.0.0.60:22-10.0.0.1:39662.service. 
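The `PROCTITLE` audit records in this log carry the process title hex-encoded, because the raw title contains NUL argv separators and spaces. A minimal helper (illustrative, not part of any audit tooling shown here) decodes the two values that recur above:

```python
# Decode the hex-encoded proctitle fields from the audit PROCTITLE records.
# argv entries are NUL-separated in the raw title; join them with spaces.
def decode_proctitle(hex_str: str) -> str:
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("ascii", errors="replace")

print(decode_proctitle("737368643A20636F7265205B707269765D"))
# sshd: core [priv]
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

So the sshd records belong to the privileged monitor process of each session for user `core`, and the `NETFILTER_CFG` records come from kube-proxy-style `iptables-restore --noflush --counters` invocations.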
Feb 9 19:46:50.773973 systemd[1]: sshd@20-10.0.0.60:22-10.0.0.1:39658.service: Deactivated successfully. Feb 9 19:46:50.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.60:22-10.0.0.1:39658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:50.774983 systemd-logind[1192]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:46:50.775034 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:46:50.775993 systemd-logind[1192]: Removed session 21. Feb 9 19:46:50.809000 audit[5217]: USER_ACCT pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.810884 sshd[5217]: Accepted publickey for core from 10.0.0.1 port 39662 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:50.809000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.809000 audit[5217]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe8a85040 a2=3 a3=0 items=0 ppid=1 pid=5217 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:50.809000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:50.811575 sshd[5217]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:50.815149 systemd-logind[1192]: New session 22 of user core. Feb 9 19:46:50.815806 systemd[1]: Started session-22.scope. 
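Every audit record above carries an `audit(EPOCH.MSEC:SERIAL)` stamp; the serial number ties the parts of one event together (e.g. the SYSCALL + PROCTITLE pair sharing `:430`). A small sketch of extracting and converting that stamp:

```python
import re
from datetime import datetime, timezone

def audit_event_time(record: str):
    """Return (UTC datetime, serial) from an audit(EPOCH.MSEC:SERIAL) stamp, or None."""
    m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
    if m is None:
        return None
    epoch, msec, serial = int(m.group(1)), int(m.group(2)), int(m.group(3))
    ts = datetime.fromtimestamp(epoch, tz=timezone.utc).replace(microsecond=msec * 1000)
    return ts, serial

print(audit_event_time("audit(1707508010.214:427): pid=1 uid=0 ..."))
# (datetime.datetime(2024, 2, 9, 19, 46, 50, 214000, tzinfo=datetime.timezone.utc), 427)
```

The converted value matches the syslog prefix `Feb 9 19:46:50.214` on the same record, confirming the journal timestamps here are UTC.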
Feb 9 19:46:50.819000 audit[5217]: USER_START pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:50.821000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.826509 sshd[5217]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:51.828132 systemd[1]: Started sshd@22-10.0.0.60:22-10.0.0.1:39664.service. Feb 9 19:46:51.827000 audit[5217]: USER_END pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.827000 audit[5217]: CRED_DISP pid=5217 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.60:22-10.0.0.1:39664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:51.831935 systemd[1]: sshd@21-10.0.0.60:22-10.0.0.1:39662.service: Deactivated successfully. Feb 9 19:46:51.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.60:22-10.0.0.1:39662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:51.833352 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:46:51.833950 systemd-logind[1192]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:46:51.834882 systemd-logind[1192]: Removed session 22. Feb 9 19:46:51.865000 audit[5264]: NETFILTER_CFG table=filter:127 family=2 entries=18 op=nft_register_rule pid=5264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.865000 audit[5264]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffff22cb630 a2=0 a3=7ffff22cb61c items=0 ppid=2345 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.866000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.866673 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 39664 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:51.867000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.867000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2072f490 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.867000 audit: PROCTITLE 
proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:51.867986 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:51.866000 audit[5264]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=5264 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.866000 audit[5264]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffff22cb630 a2=0 a3=7ffff22cb61c items=0 ppid=2345 pid=5264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.872059 systemd-logind[1192]: New session 23 of user core. Feb 9 19:46:51.872298 systemd[1]: Started session-23.scope. Feb 9 19:46:51.876000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.878000 audit[5272]: CRED_ACQ pid=5272 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:51.901000 audit[5292]: NETFILTER_CFG table=filter:129 family=2 entries=30 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.901000 audit[5292]: SYSCALL arch=c000003e syscall=46 success=yes exit=10364 a0=3 a1=7ffe14ba9ba0 a2=0 a3=7ffe14ba9b8c items=0 ppid=2345 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:51.902000 audit[5292]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:51.902000 audit[5292]: SYSCALL arch=c000003e syscall=46 success=yes exit=24988 a0=3 a1=7ffe14ba9ba0 a2=0 a3=7ffe14ba9b8c items=0 ppid=2345 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:51.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:52.106805 sshd[5249]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:52.108000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.108924 systemd[1]: Started sshd@23-10.0.0.60:22-10.0.0.1:39670.service. Feb 9 19:46:52.108000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.60:22-10.0.0.1:39670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:46:52.109891 systemd[1]: sshd@22-10.0.0.60:22-10.0.0.1:39664.service: Deactivated successfully. Feb 9 19:46:52.109000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.60:22-10.0.0.1:39664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:52.111218 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:46:52.111898 systemd-logind[1192]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:46:52.112842 systemd-logind[1192]: Removed session 23. Feb 9 19:46:52.147000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.147795 sshd[5299]: Accepted publickey for core from 10.0.0.1 port 39670 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:52.148000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.148000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb137e850 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:52.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:52.148683 sshd[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:52.151544 systemd-logind[1192]: New session 24 of user core. Feb 9 19:46:52.152494 systemd[1]: Started session-24.scope. 
Feb 9 19:46:52.155000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.156000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.248719 sshd[5299]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:52.249000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.249000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:52.250870 systemd[1]: sshd@23-10.0.0.60:22-10.0.0.1:39670.service: Deactivated successfully. Feb 9 19:46:52.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.60:22-10.0.0.1:39670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:52.252040 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:46:52.252042 systemd-logind[1192]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:46:52.252881 systemd-logind[1192]: Removed session 24. Feb 9 19:46:57.251927 systemd[1]: Started sshd@24-10.0.0.60:22-10.0.0.1:39676.service. 
Feb 9 19:46:57.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.60:22-10.0.0.1:39676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:57.252859 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 9 19:46:57.252902 kernel: audit: type=1130 audit(1707508017.251:476): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.60:22-10.0.0.1:39676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:57.287000 audit[5317]: USER_ACCT pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.287946 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 39676 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:46:57.289747 sshd[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:46:57.289000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.292841 kernel: audit: type=1101 audit(1707508017.287:477): pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.292888 kernel: audit: type=1103 audit(1707508017.289:478): pid=5317 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.292904 kernel: audit: type=1006 audit(1707508017.289:479): pid=5317 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 19:46:57.293057 systemd-logind[1192]: New session 25 of user core. Feb 9 19:46:57.293729 systemd[1]: Started session-25.scope. Feb 9 19:46:57.296983 kernel: audit: type=1300 audit(1707508017.289:479): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1d8f6ea0 a2=3 a3=0 items=0 ppid=1 pid=5317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:57.289000 audit[5317]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1d8f6ea0 a2=3 a3=0 items=0 ppid=1 pid=5317 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:57.289000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:57.298430 kernel: audit: type=1327 audit(1707508017.289:479): proctitle=737368643A20636F7265205B707269765D Feb 9 19:46:57.298457 kernel: audit: type=1105 audit(1707508017.297:480): pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.297000 audit[5317]: USER_START pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.298000 audit[5320]: CRED_ACQ pid=5320 uid=0 
auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.303637 kernel: audit: type=1103 audit(1707508017.298:481): pid=5320 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.538327 sshd[5317]: pam_unix(sshd:session): session closed for user core Feb 9 19:46:57.538000 audit[5317]: USER_END pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.540811 systemd[1]: sshd@24-10.0.0.60:22-10.0.0.1:39676.service: Deactivated successfully. Feb 9 19:46:57.541838 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:46:57.542204 systemd-logind[1192]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:46:57.538000 audit[5317]: CRED_DISP pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.542825 systemd-logind[1192]: Removed session 25. 
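The kernel `kauditd` lines print numeric `type=` codes (1101, 1103, 1106, ...) while the userspace lines name the same records (`USER_ACCT`, `CRED_ACQ`, `USER_END`, ...). Assuming the standard constants from `linux/audit.h`, the types appearing in this log map as follows:

```python
# Audit record types seen in this log, numeric code -> symbolic name
# (values per the Linux audit UAPI header linux/audit.h).
AUDIT_TYPE_NAMES = {
    1006: "LOGIN",          # auid/ses assignment at session start
    1101: "USER_ACCT",      # PAM accounting
    1103: "CRED_ACQ",       # PAM setcred (acquire)
    1104: "CRED_DISP",      # PAM setcred (dispose)
    1105: "USER_START",     # PAM session_open
    1106: "USER_END",       # PAM session_close
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1300: "SYSCALL",        # syscall audit record
    1325: "NETFILTER_CFG",  # netfilter table modification
    1327: "PROCTITLE",      # hex-encoded process title
}

print(AUDIT_TYPE_NAMES[1130])  # SERVICE_START
```

With this table, the repeating pattern per SSH login reads cleanly: `USER_ACCT` → `CRED_ACQ` → `LOGIN` → `SYSCALL`/`PROCTITLE` → `USER_START`, mirrored by `USER_END` → `CRED_DISP` at logout.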
Feb 9 19:46:57.545151 kernel: audit: type=1106 audit(1707508017.538:482): pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.545210 kernel: audit: type=1104 audit(1707508017.538:483): pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:46:57.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.60:22-10.0.0.1:39676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:46:59.001181 kubelet[2166]: E0209 19:46:59.001154 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:46:59.460000 audit[5398]: NETFILTER_CFG table=filter:131 family=2 entries=18 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:46:59.460000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffdf0f66890 a2=0 a3=7ffdf0f6687c items=0 ppid=2345 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:46:59.463000 audit[5398]: NETFILTER_CFG table=nat:132 family=2 entries=162 op=nft_register_chain pid=5398 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 9 19:46:59.463000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffdf0f66890 a2=0 a3=7ffdf0f6687c items=0 ppid=2345 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:46:59.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.028647 kubelet[2166]: E0209 19:47:00.028615 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 9 19:47:00.339798 kubelet[2166]: I0209 19:47:00.339686 2166 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:47:00.371000 audit[5426]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.371000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffc3c0e98b0 a2=0 a3=7ffc3c0e989c items=0 ppid=2345 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:00.371000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.372349 kubelet[2166]: I0209 19:47:00.372324 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d35ca669-abc2-424e-b252-f116ccb1a958-calico-apiserver-certs\") pod \"calico-apiserver-7777497956-wv9j8\" (UID: \"d35ca669-abc2-424e-b252-f116ccb1a958\") " 
pod="calico-apiserver/calico-apiserver-7777497956-wv9j8" Feb 9 19:47:00.372452 kubelet[2166]: I0209 19:47:00.372373 2166 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw4x\" (UniqueName: \"kubernetes.io/projected/d35ca669-abc2-424e-b252-f116ccb1a958-kube-api-access-5hw4x\") pod \"calico-apiserver-7777497956-wv9j8\" (UID: \"d35ca669-abc2-424e-b252-f116ccb1a958\") " pod="calico-apiserver/calico-apiserver-7777497956-wv9j8" Feb 9 19:47:00.374000 audit[5426]: NETFILTER_CFG table=nat:134 family=2 entries=198 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:00.374000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffc3c0e98b0 a2=0 a3=7ffc3c0e989c items=0 ppid=2345 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:00.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:00.473681 kubelet[2166]: E0209 19:47:00.473413 2166 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 19:47:00.475102 kubelet[2166]: E0209 19:47:00.475052 2166 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d35ca669-abc2-424e-b252-f116ccb1a958-calico-apiserver-certs podName:d35ca669-abc2-424e-b252-f116ccb1a958 nodeName:}" failed. No retries permitted until 2024-02-09 19:47:00.973934174 +0000 UTC m=+100.067092480 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d35ca669-abc2-424e-b252-f116ccb1a958-calico-apiserver-certs") pod "calico-apiserver-7777497956-wv9j8" (UID: "d35ca669-abc2-424e-b252-f116ccb1a958") : secret "calico-apiserver-certs" not found Feb 9 19:47:01.245202 env[1210]: time="2024-02-09T19:47:01.245151932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7777497956-wv9j8,Uid:d35ca669-abc2-424e-b252-f116ccb1a958,Namespace:calico-apiserver,Attempt:0,}" Feb 9 19:47:01.407000 audit[5454]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=5454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:01.407000 audit[5454]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe522760b0 a2=0 a3=7ffe5227609c items=0 ppid=2345 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:01.410000 audit[5454]: NETFILTER_CFG table=nat:136 family=2 entries=198 op=nft_register_rule pid=5454 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:01.410000 audit[5454]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe522760b0 a2=0 a3=7ffe5227609c items=0 ppid=2345 pid=5454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:01.511795 systemd-networkd[1085]: calib57db5c40ca: Link UP Feb 9 
19:47:01.512525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:47:01.512561 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib57db5c40ca: link becomes ready Feb 9 19:47:01.513620 systemd-networkd[1085]: calib57db5c40ca: Gained carrier Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.440 [INFO][5455] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0 calico-apiserver-7777497956- calico-apiserver d35ca669-abc2-424e-b252-f116ccb1a958 1217 0 2024-02-09 19:47:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7777497956 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7777497956-wv9j8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib57db5c40ca [] []}} ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.441 [INFO][5455] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.467 [INFO][5469] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" HandleID="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Workload="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 
19:47:01.477 [INFO][5469] ipam_plugin.go 268: Auto assigning IP ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" HandleID="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Workload="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000529d10), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7777497956-wv9j8", "timestamp":"2024-02-09 19:47:01.467555385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.477 [INFO][5469] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.477 [INFO][5469] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.477 [INFO][5469] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.478 [INFO][5469] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.482 [INFO][5469] ipam.go 372: Looking up existing affinities for host host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.486 [INFO][5469] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.488 [INFO][5469] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.490 [INFO][5469] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.490 [INFO][5469] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.493 [INFO][5469] ipam.go 1682: Creating new handle: k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1 Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.498 [INFO][5469] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.504 [INFO][5469] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.504 [INFO][5469] ipam.go 
847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" host="localhost" Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.504 [INFO][5469] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 19:47:01.525316 env[1210]: 2024-02-09 19:47:01.504 [INFO][5469] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" HandleID="k8s-pod-network.1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Workload="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.507 [INFO][5455] k8s.go 385: Populated endpoint ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0", GenerateName:"calico-apiserver-7777497956-", Namespace:"calico-apiserver", SelfLink:"", UID:"d35ca669-abc2-424e-b252-f116ccb1a958", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7777497956", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7777497956-wv9j8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib57db5c40ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.507 [INFO][5455] k8s.go 386: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.507 [INFO][5455] dataplane_linux.go 68: Setting the host side veth name to calib57db5c40ca ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.514 [INFO][5455] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.514 [INFO][5455] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0", GenerateName:"calico-apiserver-7777497956-", Namespace:"calico-apiserver", SelfLink:"", UID:"d35ca669-abc2-424e-b252-f116ccb1a958", ResourceVersion:"1217", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 19, 47, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7777497956", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1", Pod:"calico-apiserver-7777497956-wv9j8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib57db5c40ca", MAC:"86:02:f4:ad:6d:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 19:47:01.525892 env[1210]: 2024-02-09 19:47:01.521 [INFO][5455] k8s.go 491: Wrote updated endpoint to datastore ContainerID="1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1" Namespace="calico-apiserver" Pod="calico-apiserver-7777497956-wv9j8" WorkloadEndpoint="localhost-k8s-calico--apiserver--7777497956--wv9j8-eth0" Feb 9 19:47:01.545334 env[1210]: time="2024-02-09T19:47:01.545154038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:47:01.545334 env[1210]: time="2024-02-09T19:47:01.545225653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:47:01.545334 env[1210]: time="2024-02-09T19:47:01.545236735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:47:01.545539 env[1210]: time="2024-02-09T19:47:01.545349849Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1 pid=5503 runtime=io.containerd.runc.v2 Feb 9 19:47:01.546000 audit[5509]: NETFILTER_CFG table=filter:137 family=2 entries=59 op=nft_register_chain pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 19:47:01.546000 audit[5509]: SYSCALL arch=c000003e syscall=46 success=yes exit=29292 a0=3 a1=7ffc27135f30 a2=0 a3=7ffc27135f1c items=0 ppid=3797 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:01.546000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 19:47:01.570231 systemd-resolved[1137]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 9 19:47:01.590807 env[1210]: time="2024-02-09T19:47:01.590744444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7777497956-wv9j8,Uid:d35ca669-abc2-424e-b252-f116ccb1a958,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1\"" Feb 9 19:47:01.592244 env[1210]: 
time="2024-02-09T19:47:01.592218540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 19:47:02.541250 systemd[1]: Started sshd@25-10.0.0.60:22-10.0.0.1:47352.service. Feb 9 19:47:02.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.60:22-10.0.0.1:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:47:02.544362 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 9 19:47:02.544465 kernel: audit: type=1130 audit(1707508022.540:492): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.60:22-10.0.0.1:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:47:02.577000 audit[5539]: USER_ACCT pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.578453 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 47352 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:02.580000 audit[5539]: CRED_ACQ pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.581446 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:02.583634 kernel: audit: type=1101 audit(1707508022.577:493): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.583797 kernel: audit: 
type=1103 audit(1707508022.580:494): pid=5539 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.583823 kernel: audit: type=1006 audit(1707508022.580:495): pid=5539 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 9 19:47:02.585167 kernel: audit: type=1300 audit(1707508022.580:495): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff94ea9640 a2=3 a3=0 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:02.580000 audit[5539]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff94ea9640 a2=3 a3=0 items=0 ppid=1 pid=5539 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:02.584881 systemd-logind[1192]: New session 26 of user core. Feb 9 19:47:02.585596 systemd[1]: Started session-26.scope. 
Feb 9 19:47:02.580000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 19:47:02.588715 kernel: audit: type=1327 audit(1707508022.580:495): proctitle=737368643A20636F7265205B707269765D Feb 9 19:47:02.589000 audit[5539]: USER_START pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.591000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.595367 kernel: audit: type=1105 audit(1707508022.589:496): pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.595430 kernel: audit: type=1103 audit(1707508022.591:497): pid=5542 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.660532 systemd-networkd[1085]: calib57db5c40ca: Gained IPv6LL Feb 9 19:47:02.688607 sshd[5539]: pam_unix(sshd:session): session closed for user core Feb 9 19:47:02.689000 audit[5539]: USER_END pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.690911 systemd[1]: 
sshd@25-10.0.0.60:22-10.0.0.1:47352.service: Deactivated successfully. Feb 9 19:47:02.691920 systemd-logind[1192]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:47:02.692045 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:47:02.689000 audit[5539]: CRED_DISP pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.692825 systemd-logind[1192]: Removed session 26. Feb 9 19:47:02.694878 kernel: audit: type=1106 audit(1707508022.689:498): pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.694950 kernel: audit: type=1104 audit(1707508022.689:499): pid=5539 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:02.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.60:22-10.0.0.1:47352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:47:06.558778 env[1210]: time="2024-02-09T19:47:06.558707202Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.562527 env[1210]: time="2024-02-09T19:47:06.562479059Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.564809 env[1210]: time="2024-02-09T19:47:06.564782023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.566632 env[1210]: time="2024-02-09T19:47:06.566608704Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:47:06.567206 env[1210]: time="2024-02-09T19:47:06.567175288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:848c5b919e8d33dbad8c8c64aa6aec07c29cfe6e4f6312ceafc1641ea929f91a\"" Feb 9 19:47:06.569188 env[1210]: time="2024-02-09T19:47:06.569157033Z" level=info msg="CreateContainer within sandbox \"1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 19:47:06.578227 env[1210]: time="2024-02-09T19:47:06.578176304Z" level=info msg="CreateContainer within sandbox \"1ed544118d9f36e3f708e459998b9861acd2c6896cfb17502343d9f942423fb1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"34fef18c149c7b01572dfc44216be940ecae4cffc68f13fa81508fcf661fad27\"" Feb 9 19:47:06.580026 env[1210]: time="2024-02-09T19:47:06.578685569Z" 
level=info msg="StartContainer for \"34fef18c149c7b01572dfc44216be940ecae4cffc68f13fa81508fcf661fad27\"" Feb 9 19:47:06.599449 systemd[1]: run-containerd-runc-k8s.io-34fef18c149c7b01572dfc44216be940ecae4cffc68f13fa81508fcf661fad27-runc.VeRtcJ.mount: Deactivated successfully. Feb 9 19:47:06.635995 env[1210]: time="2024-02-09T19:47:06.635929169Z" level=info msg="StartContainer for \"34fef18c149c7b01572dfc44216be940ecae4cffc68f13fa81508fcf661fad27\" returns successfully" Feb 9 19:47:07.343000 audit[5621]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=5621 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.343000 audit[5621]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffe55c82d20 a2=0 a3=7ffe55c82d0c items=0 ppid=2345 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.343000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.346000 audit[5621]: NETFILTER_CFG table=nat:139 family=2 entries=198 op=nft_register_rule pid=5621 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.346000 audit[5621]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffe55c82d20 a2=0 a3=7ffe55c82d0c items=0 ppid=2345 pid=5621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.473000 audit[5647]: NETFILTER_CFG table=filter:140 family=2 entries=8 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Feb 9 19:47:07.473000 audit[5647]: SYSCALL arch=c000003e syscall=46 success=yes exit=2620 a0=3 a1=7ffdaba81fb0 a2=0 a3=7ffdaba81f9c items=0 ppid=2345 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.475000 audit[5647]: NETFILTER_CFG table=nat:141 family=2 entries=198 op=nft_register_rule pid=5647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 19:47:07.475000 audit[5647]: SYSCALL arch=c000003e syscall=46 success=yes exit=66940 a0=3 a1=7ffdaba81fb0 a2=0 a3=7ffdaba81f9c items=0 ppid=2345 pid=5647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:47:07.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 19:47:07.692006 systemd[1]: Started sshd@26-10.0.0.60:22-10.0.0.1:47358.service. Feb 9 19:47:07.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.60:22-10.0.0.1:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:47:07.694492 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 9 19:47:07.694545 kernel: audit: type=1130 audit(1707508027.691:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.60:22-10.0.0.1:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:47:07.730000 audit[5648]: USER_ACCT pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:07.731592 sshd[5648]: Accepted publickey for core from 10.0.0.1 port 47358 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo Feb 9 19:47:07.734044 sshd[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:47:07.733000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:07.736801 kernel: audit: type=1101 audit(1707508027.730:506): pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:07.736861 kernel: audit: type=1103 audit(1707508027.733:507): pid=5648 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Feb 9 19:47:07.736882 kernel: audit: type=1006 audit(1707508027.733:508): pid=5648 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 9 19:47:07.737509 systemd-logind[1192]: New session 27 of user core. Feb 9 19:47:07.738194 systemd[1]: Started session-27.scope. 
Feb 9 19:47:07.733000 audit[5648]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd521aeb0 a2=3 a3=0 items=0 ppid=1 pid=5648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:47:07.741053 kernel: audit: type=1300 audit(1707508027.733:508): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd521aeb0 a2=3 a3=0 items=0 ppid=1 pid=5648 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:47:07.741101 kernel: audit: type=1327 audit(1707508027.733:508): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:47:07.733000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:47:07.742000 audit[5648]: USER_START pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.743000 audit[5651]: CRED_ACQ pid=5651 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.748175 kernel: audit: type=1105 audit(1707508027.742:509): pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.748222 kernel: audit: type=1103 audit(1707508027.743:510): pid=5651 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.866840 sshd[5648]: pam_unix(sshd:session): session closed for user core
Feb 9 19:47:07.867000 audit[5648]: USER_END pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.869110 systemd[1]: sshd@26-10.0.0.60:22-10.0.0.1:47358.service: Deactivated successfully.
Feb 9 19:47:07.870168 systemd-logind[1192]: Session 27 logged out. Waiting for processes to exit.
Feb 9 19:47:07.870170 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 19:47:07.867000 audit[5648]: CRED_DISP pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.871112 systemd-logind[1192]: Removed session 27.
Feb 9 19:47:07.873371 kernel: audit: type=1106 audit(1707508027.867:511): pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.873438 kernel: audit: type=1104 audit(1707508027.867:512): pid=5648 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:07.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.60:22-10.0.0.1:47358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:47:09.028810 kubelet[2166]: E0209 19:47:09.028769 2166 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 9 19:47:12.870200 systemd[1]: Started sshd@27-10.0.0.60:22-10.0.0.1:51860.service.
Feb 9 19:47:12.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.60:22-10.0.0.1:51860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:47:12.871413 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 19:47:12.871474 kernel: audit: type=1130 audit(1707508032.868:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.60:22-10.0.0.1:51860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:47:12.905000 audit[5665]: USER_ACCT pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.907170 sshd[5665]: Accepted publickey for core from 10.0.0.1 port 51860 ssh2: RSA SHA256:6trw0do8ovoIwkWpSqWPgGsMLbX9JFOWDr7uNmRxrVo
Feb 9 19:47:12.908771 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:47:12.907000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.913042 kernel: audit: type=1101 audit(1707508032.905:515): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.913094 kernel: audit: type=1103 audit(1707508032.907:516): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.913114 kernel: audit: type=1006 audit(1707508032.907:517): pid=5665 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Feb 9 19:47:12.912739 systemd-logind[1192]: New session 28 of user core.
Feb 9 19:47:12.913426 systemd[1]: Started session-28.scope.
Feb 9 19:47:12.907000 audit[5665]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3d2bf560 a2=3 a3=0 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:47:12.918132 kernel: audit: type=1300 audit(1707508032.907:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3d2bf560 a2=3 a3=0 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:47:12.918179 kernel: audit: type=1327 audit(1707508032.907:517): proctitle=737368643A20636F7265205B707269765D
Feb 9 19:47:12.907000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 19:47:12.919336 kernel: audit: type=1105 audit(1707508032.917:518): pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.917000 audit[5665]: USER_START pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.918000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:12.924639 kernel: audit: type=1103 audit(1707508032.918:519): pid=5668 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:13.012727 sshd[5665]: pam_unix(sshd:session): session closed for user core
Feb 9 19:47:13.012000 audit[5665]: USER_END pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:13.015075 systemd[1]: sshd@27-10.0.0.60:22-10.0.0.1:51860.service: Deactivated successfully.
Feb 9 19:47:13.015995 systemd[1]: session-28.scope: Deactivated successfully.
Feb 9 19:47:13.016857 systemd-logind[1192]: Session 28 logged out. Waiting for processes to exit.
Feb 9 19:47:13.019325 kernel: audit: type=1106 audit(1707508033.012:520): pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:13.019452 kernel: audit: type=1104 audit(1707508033.012:521): pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:13.012000 audit[5665]: CRED_DISP pid=5665 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Feb 9 19:47:13.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.60:22-10.0.0.1:51860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:47:13.017629 systemd-logind[1192]: Removed session 28.