Jul 2 06:56:43.819448 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 06:56:43.820449 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:56:43.820460 kernel: BIOS-provided physical RAM map: Jul 2 06:56:43.820465 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 06:56:43.820470 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 06:56:43.820474 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 06:56:43.820480 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 06:56:43.820485 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 06:56:43.820490 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 06:56:43.820495 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 06:56:43.820501 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 06:56:43.820506 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 2 06:56:43.820511 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 2 06:56:43.820516 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 2 06:56:43.820523 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 06:56:43.820529 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 06:56:43.820535 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 06:56:43.820540 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 06:56:43.820547 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 06:56:43.820554 kernel: NX (Execute Disable) protection: active Jul 2 06:56:43.820561 kernel: efi: EFI v2.70 by EDK II Jul 2 06:56:43.820568 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b775018 Jul 2 06:56:43.820573 kernel: SMBIOS 2.8 present. Jul 2 06:56:43.820578 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 06:56:43.820583 kernel: Hypervisor detected: KVM Jul 2 06:56:43.820588 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 06:56:43.820593 kernel: kvm-clock: using sched offset of 7017956836 cycles Jul 2 06:56:43.820601 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 06:56:43.820606 kernel: tsc: Detected 2794.748 MHz processor Jul 2 06:56:43.820612 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 06:56:43.820617 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 06:56:43.820623 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 06:56:43.820628 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 06:56:43.820633 kernel: Using GB pages for direct mapping Jul 2 06:56:43.820639 kernel: Secure boot disabled Jul 2 06:56:43.820645 kernel: ACPI: Early table checksum verification disabled Jul 2 06:56:43.820651 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 06:56:43.820656 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 06:56:43.820661 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:56:43.820667 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:56:43.820675 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 06:56:43.820681 kernel: ACPI: APIC 
0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:56:43.820690 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:56:43.820698 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:56:43.820708 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 06:56:43.820714 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 06:56:43.820720 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 06:56:43.820726 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 06:56:43.820733 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 06:56:43.820753 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 06:56:43.820760 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jul 2 06:56:43.820766 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 06:56:43.820771 kernel: No NUMA configuration found Jul 2 06:56:43.820777 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 06:56:43.820783 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 06:56:43.820789 kernel: Zone ranges: Jul 2 06:56:43.820795 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 06:56:43.820801 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 06:56:43.820806 kernel: Normal empty Jul 2 06:56:43.820813 kernel: Movable zone start for each node Jul 2 06:56:43.820819 kernel: Early memory node ranges Jul 2 06:56:43.820824 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 06:56:43.820830 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 06:56:43.820836 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 06:56:43.820842 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 06:56:43.820847 kernel: 
node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 2 06:56:43.820853 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 06:56:43.820859 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 06:56:43.820866 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 06:56:43.820871 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 06:56:43.820877 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 06:56:43.820883 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 06:56:43.820888 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 06:56:43.820894 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 06:56:43.820900 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 06:56:43.820906 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 06:56:43.820911 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 06:56:43.820917 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 06:56:43.820924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 06:56:43.820930 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 06:56:43.820935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 06:56:43.820941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 06:56:43.820947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 06:56:43.820953 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 06:56:43.820958 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 06:56:43.820964 kernel: TSC deadline timer available Jul 2 06:56:43.820970 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 06:56:43.820979 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 06:56:43.820987 kernel: kvm-guest: setup PV sched yield Jul 2 06:56:43.820995 kernel: [mem 0x9d000000-0xffffffff] available for PCI 
devices Jul 2 06:56:43.821001 kernel: Booting paravirtualized kernel on KVM Jul 2 06:56:43.821007 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 06:56:43.821013 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 2 06:56:43.821018 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jul 2 06:56:43.821024 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jul 2 06:56:43.821030 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 06:56:43.821037 kernel: kvm-guest: PV spinlocks enabled Jul 2 06:56:43.821042 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 06:56:43.821048 kernel: Fallback order for Node 0: 0 Jul 2 06:56:43.821054 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 2 06:56:43.821060 kernel: Policy zone: DMA32 Jul 2 06:56:43.821066 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:56:43.821073 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 06:56:43.821079 kernel: random: crng init done Jul 2 06:56:43.821086 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 06:56:43.821092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 06:56:43.821098 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 06:56:43.821104 kernel: Memory: 2392604K/2567000K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 174136K reserved, 0K cma-reserved) Jul 2 06:56:43.821110 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 06:56:43.821134 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 06:56:43.821140 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 06:56:43.821145 kernel: Dynamic Preempt: voluntary Jul 2 06:56:43.821151 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 06:56:43.821159 kernel: rcu: RCU event tracing is enabled. Jul 2 06:56:43.821166 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 06:56:43.821172 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 06:56:43.821177 kernel: Rude variant of Tasks RCU enabled. Jul 2 06:56:43.821183 kernel: Tracing variant of Tasks RCU enabled. Jul 2 06:56:43.821193 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 06:56:43.821201 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 06:56:43.821207 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 06:56:43.821213 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 06:56:43.821219 kernel: Console: colour dummy device 80x25 Jul 2 06:56:43.821225 kernel: printk: console [ttyS0] enabled Jul 2 06:56:43.821231 kernel: ACPI: Core revision 20220331 Jul 2 06:56:43.821238 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 06:56:43.821248 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 06:56:43.821254 kernel: x2apic enabled Jul 2 06:56:43.821260 kernel: Switched APIC routing to physical x2apic. Jul 2 06:56:43.821266 kernel: kvm-guest: setup PV IPIs Jul 2 06:56:43.821273 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 06:56:43.821279 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 06:56:43.821285 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 2 06:56:43.821291 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 06:56:43.821297 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 06:56:43.821303 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 06:56:43.821310 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 06:56:43.821316 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 06:56:43.821322 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 06:56:43.821329 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 06:56:43.821335 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 06:56:43.821341 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 06:56:43.821347 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 06:56:43.821353 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 06:56:43.821359 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 06:56:43.821365 kernel: x86/fpu: Supporting 
XSAVE feature 0x002: 'SSE registers' Jul 2 06:56:43.821371 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 06:56:43.821378 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 06:56:43.821388 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 2 06:56:43.821396 kernel: Freeing SMP alternatives memory: 32K Jul 2 06:56:43.821403 kernel: pid_max: default: 32768 minimum: 301 Jul 2 06:56:43.821409 kernel: LSM: Security Framework initializing Jul 2 06:56:43.821415 kernel: SELinux: Initializing. Jul 2 06:56:43.821421 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 06:56:43.821427 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 06:56:43.821433 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 06:56:43.821439 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:56:43.821447 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:56:43.821453 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:56:43.821459 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:56:43.821465 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:56:43.821471 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:56:43.821477 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 06:56:43.821482 kernel: ... version: 0 Jul 2 06:56:43.821488 kernel: ... bit width: 48 Jul 2 06:56:43.821494 kernel: ... generic registers: 6 Jul 2 06:56:43.821500 kernel: ... value mask: 0000ffffffffffff Jul 2 06:56:43.821509 kernel: ... max period: 00007fffffffffff Jul 2 06:56:43.821517 kernel: ... fixed-purpose events: 0 Jul 2 06:56:43.821525 kernel: ... 
event mask: 000000000000003f Jul 2 06:56:43.821532 kernel: signal: max sigframe size: 1776 Jul 2 06:56:43.821538 kernel: rcu: Hierarchical SRCU implementation. Jul 2 06:56:43.821545 kernel: rcu: Max phase no-delay instances is 400. Jul 2 06:56:43.821551 kernel: smp: Bringing up secondary CPUs ... Jul 2 06:56:43.821557 kernel: x86: Booting SMP configuration: Jul 2 06:56:43.821563 kernel: .... node #0, CPUs: #1 #2 #3 Jul 2 06:56:43.821570 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 06:56:43.821576 kernel: smpboot: Max logical packages: 1 Jul 2 06:56:43.821582 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 06:56:43.821588 kernel: devtmpfs: initialized Jul 2 06:56:43.821594 kernel: x86/mm: Memory block size: 128MB Jul 2 06:56:43.821600 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 06:56:43.821606 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 06:56:43.821612 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 06:56:43.821619 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 06:56:43.821626 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 06:56:43.821632 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 06:56:43.821638 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 06:56:43.821644 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 06:56:43.821651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 06:56:43.821658 kernel: audit: initializing netlink subsys (disabled) Jul 2 06:56:43.821665 kernel: audit: type=2000 audit(1719903403.704:1): state=initialized audit_enabled=0 res=1 Jul 2 06:56:43.821671 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 06:56:43.821677 kernel: thermal_sys: 
Registered thermal governor 'user_space' Jul 2 06:56:43.821684 kernel: cpuidle: using governor menu Jul 2 06:56:43.821690 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 06:56:43.821696 kernel: dca service started, version 1.12.1 Jul 2 06:56:43.821702 kernel: PCI: Using configuration type 1 for base access Jul 2 06:56:43.821708 kernel: PCI: Using configuration type 1 for extended access Jul 2 06:56:43.821714 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 06:56:43.821720 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 06:56:43.821726 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 06:56:43.821732 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 06:56:43.821748 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 06:56:43.821755 kernel: ACPI: Added _OSI(Module Device) Jul 2 06:56:43.821761 kernel: ACPI: Added _OSI(Processor Device) Jul 2 06:56:43.821767 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 06:56:43.821772 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 06:56:43.821779 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 06:56:43.821784 kernel: ACPI: Interpreter enabled Jul 2 06:56:43.821790 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 06:56:43.821796 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 06:56:43.821804 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 06:56:43.821810 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 06:56:43.821816 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 06:56:43.821822 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 06:56:43.822986 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 06:56:43.823000 kernel: acpiphp: Slot [3] registered Jul 2 06:56:43.823007 kernel: 
acpiphp: Slot [4] registered Jul 2 06:56:43.823012 kernel: acpiphp: Slot [5] registered Jul 2 06:56:43.823022 kernel: acpiphp: Slot [6] registered Jul 2 06:56:43.823028 kernel: acpiphp: Slot [7] registered Jul 2 06:56:43.823034 kernel: acpiphp: Slot [8] registered Jul 2 06:56:43.823040 kernel: acpiphp: Slot [9] registered Jul 2 06:56:43.823046 kernel: acpiphp: Slot [10] registered Jul 2 06:56:43.823052 kernel: acpiphp: Slot [11] registered Jul 2 06:56:43.823057 kernel: acpiphp: Slot [12] registered Jul 2 06:56:43.823063 kernel: acpiphp: Slot [13] registered Jul 2 06:56:43.823069 kernel: acpiphp: Slot [14] registered Jul 2 06:56:43.823077 kernel: acpiphp: Slot [15] registered Jul 2 06:56:43.823083 kernel: acpiphp: Slot [16] registered Jul 2 06:56:43.823088 kernel: acpiphp: Slot [17] registered Jul 2 06:56:43.823103 kernel: acpiphp: Slot [18] registered Jul 2 06:56:43.823135 kernel: acpiphp: Slot [19] registered Jul 2 06:56:43.823142 kernel: acpiphp: Slot [20] registered Jul 2 06:56:43.823148 kernel: acpiphp: Slot [21] registered Jul 2 06:56:43.823154 kernel: acpiphp: Slot [22] registered Jul 2 06:56:43.823160 kernel: acpiphp: Slot [23] registered Jul 2 06:56:43.823166 kernel: acpiphp: Slot [24] registered Jul 2 06:56:43.823174 kernel: acpiphp: Slot [25] registered Jul 2 06:56:43.823180 kernel: acpiphp: Slot [26] registered Jul 2 06:56:43.823186 kernel: acpiphp: Slot [27] registered Jul 2 06:56:43.823192 kernel: acpiphp: Slot [28] registered Jul 2 06:56:43.823198 kernel: acpiphp: Slot [29] registered Jul 2 06:56:43.823204 kernel: acpiphp: Slot [30] registered Jul 2 06:56:43.823213 kernel: acpiphp: Slot [31] registered Jul 2 06:56:43.823221 kernel: PCI host bridge to bus 0000:00 Jul 2 06:56:43.823305 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 06:56:43.823376 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 06:56:43.823438 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 
06:56:43.823497 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 06:56:43.823553 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 06:56:43.823620 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 06:56:43.823705 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 06:56:43.823798 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 06:56:43.823878 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 06:56:43.823945 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 06:56:43.824021 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 06:56:43.824093 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 06:56:43.824208 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 06:56:43.824317 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 06:56:43.824406 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 06:56:43.824481 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 06:56:43.824552 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 06:56:43.824624 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 06:56:43.824693 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 06:56:43.824787 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 06:56:43.824862 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 06:56:43.824937 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 06:56:43.825002 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 06:56:43.825124 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 06:56:43.825201 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 06:56:43.825279 kernel: pci 0000:00:03.0: reg 0x14: [mem 
0xc1042000-0xc1042fff] Jul 2 06:56:43.825351 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 06:56:43.825430 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 06:56:43.825501 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 06:56:43.825577 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 06:56:43.825647 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 06:56:43.825732 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 06:56:43.825814 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 06:56:43.825882 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 06:56:43.825961 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 06:56:43.826030 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 06:56:43.826039 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 06:56:43.826045 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 06:56:43.826051 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 06:56:43.826059 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 06:56:43.826067 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 06:56:43.826076 kernel: iommu: Default domain type: Translated Jul 2 06:56:43.826084 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 06:56:43.826092 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 06:56:43.826098 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 06:56:43.826104 kernel: PTP clock support registered Jul 2 06:56:43.826110 kernel: Registered efivars operations Jul 2 06:56:43.826129 kernel: PCI: Using ACPI for IRQ routing Jul 2 06:56:43.826135 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 06:56:43.826141 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 06:56:43.826147 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 06:56:43.826153 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 06:56:43.826161 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 06:56:43.826234 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 06:56:43.826304 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 06:56:43.829796 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 06:56:43.829809 kernel: vgaarb: loaded Jul 2 06:56:43.829816 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 06:56:43.829822 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 06:56:43.829829 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 06:56:43.829835 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 06:56:43.829845 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 06:56:43.829851 kernel: pnp: PnP ACPI init Jul 2 06:56:43.829933 kernel: pnp 00:02: [dma 2] Jul 2 06:56:43.829943 kernel: pnp: PnP ACPI: found 6 devices Jul 2 06:56:43.829950 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 06:56:43.829956 kernel: NET: Registered PF_INET protocol family Jul 2 06:56:43.829962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 06:56:43.829969 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 06:56:43.829977 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 
06:56:43.829984 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 06:56:43.829992 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 06:56:43.830001 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 06:56:43.830007 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 06:56:43.830013 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 06:56:43.830020 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 06:56:43.830026 kernel: NET: Registered PF_XDP protocol family Jul 2 06:56:43.830105 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 06:56:43.830219 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 06:56:43.830299 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 06:56:43.830362 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 06:56:43.830427 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 06:56:43.830540 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 06:56:43.830608 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 06:56:43.830700 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 06:56:43.830789 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 06:56:43.830802 kernel: PCI: CLS 0 bytes, default 64 Jul 2 06:56:43.830811 kernel: Initialise system trusted keyrings Jul 2 06:56:43.830820 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 06:56:43.830829 kernel: Key type asymmetric registered Jul 2 06:56:43.830838 kernel: Asymmetric key parser 'x509' registered Jul 2 06:56:43.830846 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 06:56:43.830853 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 
06:56:43.830859 kernel: io scheduler mq-deadline registered Jul 2 06:56:43.830869 kernel: io scheduler kyber registered Jul 2 06:56:43.830878 kernel: io scheduler bfq registered Jul 2 06:56:43.830891 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 06:56:43.830900 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 06:56:43.830908 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 06:56:43.830916 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 06:56:43.830925 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 06:56:43.830933 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 06:56:43.830941 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 06:56:43.830953 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 06:56:43.830960 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 06:56:43.830975 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 06:56:43.831064 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 06:56:43.831159 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 06:56:43.831230 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T06:56:43 UTC (1719903403) Jul 2 06:56:43.831303 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 06:56:43.831316 kernel: efifb: probing for efifb Jul 2 06:56:43.831323 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 2 06:56:43.831329 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 2 06:56:43.831336 kernel: efifb: scrolling: redraw Jul 2 06:56:43.831342 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 2 06:56:43.831349 kernel: Console: switching to colour frame buffer device 100x37 Jul 2 06:56:43.831355 kernel: fb0: EFI VGA frame buffer device Jul 2 06:56:43.831361 kernel: pstore: Registered efi as persistent store backend Jul 2 06:56:43.831368 kernel: NET: Registered PF_INET6 protocol 
family Jul 2 06:56:43.831374 kernel: Segment Routing with IPv6 Jul 2 06:56:43.831382 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 06:56:43.831388 kernel: NET: Registered PF_PACKET protocol family Jul 2 06:56:43.831394 kernel: Key type dns_resolver registered Jul 2 06:56:43.831400 kernel: IPI shorthand broadcast: enabled Jul 2 06:56:43.831407 kernel: sched_clock: Marking stable (515582491, 113249905)->(684220729, -55388333) Jul 2 06:56:43.831413 kernel: registered taskstats version 1 Jul 2 06:56:43.831422 kernel: Loading compiled-in X.509 certificates Jul 2 06:56:43.831428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 06:56:43.831435 kernel: Key type .fscrypt registered Jul 2 06:56:43.831441 kernel: Key type fscrypt-provisioning registered Jul 2 06:56:43.831447 kernel: pstore: Using crash dump compression: deflate Jul 2 06:56:43.831454 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 06:56:43.831460 kernel: ima: Allocated hash algorithm: sha1 Jul 2 06:56:43.831469 kernel: ima: No architecture policies found Jul 2 06:56:43.831480 kernel: clk: Disabling unused clocks Jul 2 06:56:43.831489 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 06:56:43.831498 kernel: Write protecting the kernel read-only data: 34816k Jul 2 06:56:43.831508 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 06:56:43.831517 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 06:56:43.831524 kernel: Run /init as init process Jul 2 06:56:43.831532 kernel: with arguments: Jul 2 06:56:43.831539 kernel: /init Jul 2 06:56:43.831545 kernel: with environment: Jul 2 06:56:43.831552 kernel: HOME=/ Jul 2 06:56:43.831558 kernel: TERM=linux Jul 2 06:56:43.831565 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 06:56:43.831574 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID 
+CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:56:43.831582 systemd[1]: Detected virtualization kvm. Jul 2 06:56:43.831590 systemd[1]: Detected architecture x86-64. Jul 2 06:56:43.831597 systemd[1]: Running in initrd. Jul 2 06:56:43.831604 systemd[1]: No hostname configured, using default hostname. Jul 2 06:56:43.831612 systemd[1]: Hostname set to <localhost>. Jul 2 06:56:43.831620 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:56:43.831629 systemd[1]: Queued start job for default target initrd.target. Jul 2 06:56:43.831640 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:56:43.831649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:56:43.831658 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:56:43.831668 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:56:43.831677 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:56:43.831685 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:56:43.831692 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:56:43.831699 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:56:43.831706 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 06:56:43.831713 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 06:56:43.831720 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 06:56:43.831729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:56:43.831738 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:56:43.831758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 2 06:56:43.831768 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:56:43.831777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:56:43.831785 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 06:56:43.831792 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 06:56:43.831799 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:56:43.831806 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:56:43.831815 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 06:56:43.831822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:56:43.831829 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 06:56:43.831836 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:56:43.831843 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:56:43.831850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:56:43.831857 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 06:56:43.831865 kernel: audit: type=1130 audit(1719903403.826:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.831876 systemd-journald[195]: Journal started Jul 2 06:56:43.831917 systemd-journald[195]: Runtime Journal (/run/log/journal/09d69fbf60d1427cb3010302a6cdcb1f) is 6.0M, max 48.3M, 42.3M free. Jul 2 06:56:43.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:43.823139 systemd-modules-load[196]: Inserted module 'overlay' Jul 2 06:56:43.833133 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:56:43.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.838151 kernel: audit: type=1130 audit(1719903403.835:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.843282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:56:43.850032 kernel: audit: type=1130 audit(1719903403.844:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.844940 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:56:43.845759 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 06:56:43.856233 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 06:56:43.858509 systemd-modules-load[196]: Inserted module 'br_netfilter' Jul 2 06:56:43.859705 kernel: Bridge firewalling registered Jul 2 06:56:43.859936 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
Jul 2 06:56:43.860407 dracut-cmdline[212]: dracut-dracut-053 Jul 2 06:56:43.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.865167 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:56:43.872841 kernel: audit: type=1130 audit(1719903403.863:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.872864 kernel: audit: type=1334 audit(1719903403.864:6): prog-id=6 op=LOAD Jul 2 06:56:43.864000 audit: BPF prog-id=6 op=LOAD Jul 2 06:56:43.874277 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:56:43.878070 kernel: SCSI subsystem initialized Jul 2 06:56:43.890818 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 06:56:43.890861 kernel: device-mapper: uevent: version 1.0.3 Jul 2 06:56:43.890871 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 06:56:43.893950 systemd-modules-load[196]: Inserted module 'dm_multipath' Jul 2 06:56:43.894808 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:56:43.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:43.901148 kernel: audit: type=1130 audit(1719903403.897:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.905258 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:56:43.912398 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:56:43.917148 kernel: audit: type=1130 audit(1719903403.912:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.926411 systemd-resolved[229]: Positive Trust Anchors: Jul 2 06:56:43.927169 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:56:43.930270 kernel: Loading iSCSI transport class v2.0-870. Jul 2 06:56:43.927206 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:56:43.929662 systemd-resolved[229]: Defaulting to hostname 'linux'. Jul 2 06:56:43.943325 kernel: audit: type=1130 audit(1719903403.938:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:43.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:43.930367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:56:43.938695 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:56:43.950149 kernel: iscsi: registered transport (tcp) Jul 2 06:56:43.978155 kernel: iscsi: registered transport (qla4xxx) Jul 2 06:56:43.978206 kernel: QLogic iSCSI HBA Driver Jul 2 06:56:44.015359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 06:56:44.020287 kernel: audit: type=1130 audit(1719903404.016:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.026411 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 06:56:44.092178 kernel: raid6: avx2x4 gen() 29458 MB/s Jul 2 06:56:44.109147 kernel: raid6: avx2x2 gen() 30919 MB/s Jul 2 06:56:44.126259 kernel: raid6: avx2x1 gen() 25209 MB/s Jul 2 06:56:44.126295 kernel: raid6: using algorithm avx2x2 gen() 30919 MB/s Jul 2 06:56:44.144300 kernel: raid6: .... xor() 18572 MB/s, rmw enabled Jul 2 06:56:44.144333 kernel: raid6: using avx2x2 recovery algorithm Jul 2 06:56:44.149149 kernel: xor: automatically using best checksumming function avx Jul 2 06:56:44.315177 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 06:56:44.325034 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 2 06:56:44.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.345000 audit: BPF prog-id=7 op=LOAD Jul 2 06:56:44.345000 audit: BPF prog-id=8 op=LOAD Jul 2 06:56:44.356338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:56:44.388583 systemd-udevd[400]: Using default interface naming scheme 'v252'. Jul 2 06:56:44.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.392592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:56:44.396630 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 06:56:44.409017 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jul 2 06:56:44.433177 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:56:44.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.451314 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:56:44.487939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:56:44.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:44.522146 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 06:56:44.538997 kernel: libata version 3.00 loaded. 
Jul 2 06:56:44.539042 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 06:56:44.547294 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 2 06:56:44.552010 kernel: scsi host0: ata_piix Jul 2 06:56:44.552199 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 06:56:44.552328 kernel: scsi host1: ata_piix Jul 2 06:56:44.552465 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 06:56:44.552479 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 06:56:44.552490 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 06:56:44.552502 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 06:56:44.552514 kernel: GPT:9289727 != 19775487 Jul 2 06:56:44.552525 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 06:56:44.552537 kernel: GPT:9289727 != 19775487 Jul 2 06:56:44.552547 kernel: AES CTR mode by8 optimization enabled Jul 2 06:56:44.552559 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 06:56:44.552573 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:56:44.703198 kernel: ata2: found unknown device (class 0) Jul 2 06:56:44.704153 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 06:56:44.706138 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 06:56:44.746150 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (443) Jul 2 06:56:44.748133 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (452) Jul 2 06:56:44.751506 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 06:56:44.758818 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:56:44.764057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jul 2 06:56:44.764137 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 06:56:44.771641 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 06:56:44.801075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 06:56:44.801095 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:56:44.801107 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 06:56:44.772200 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 06:56:44.785388 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 06:56:44.804381 disk-uuid[526]: Primary Header is updated. Jul 2 06:56:44.804381 disk-uuid[526]: Secondary Entries is updated. Jul 2 06:56:44.804381 disk-uuid[526]: Secondary Header is updated. Jul 2 06:56:44.807955 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:56:44.812154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:56:45.835150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:56:45.835504 disk-uuid[527]: The operation has completed successfully. Jul 2 06:56:45.860098 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 06:56:45.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:45.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:45.860201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 06:56:45.883309 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jul 2 06:56:45.885914 sh[543]: Success Jul 2 06:56:45.897137 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 06:56:45.922983 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 06:56:45.939919 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 06:56:45.942435 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 06:56:45.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:45.951980 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 06:56:45.952009 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:56:45.952018 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 06:56:45.954835 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 06:56:45.954860 kernel: BTRFS info (device dm-0): using free space tree Jul 2 06:56:45.958594 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 06:56:45.959577 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 06:56:45.972251 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 06:56:45.974179 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 06:56:45.981784 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:56:45.981812 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:56:45.981821 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:56:45.989966 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 06:56:45.991602 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:56:46.045569 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:56:46.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.046000 audit: BPF prog-id=9 op=LOAD Jul 2 06:56:46.059314 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:56:46.079081 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 06:56:46.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.080214 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 06:56:46.089898 systemd-networkd[722]: lo: Link UP Jul 2 06:56:46.089909 systemd-networkd[722]: lo: Gained carrier Jul 2 06:56:46.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.090394 systemd-networkd[722]: Enumeration completed Jul 2 06:56:46.090579 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:56:46.090641 systemd-networkd[722]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:56:46.090645 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:56:46.090850 systemd[1]: Reached target network.target - Network. 
Jul 2 06:56:46.091803 systemd-networkd[722]: eth0: Link UP Jul 2 06:56:46.091806 systemd-networkd[722]: eth0: Gained carrier Jul 2 06:56:46.091813 systemd-networkd[722]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:56:46.100230 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:56:46.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.104415 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:56:46.112891 iscsid[730]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:56:46.112891 iscsid[730]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 06:56:46.112891 iscsid[730]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 06:56:46.112891 iscsid[730]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 06:56:46.112891 iscsid[730]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 06:56:46.112891 iscsid[730]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:56:46.112891 iscsid[730]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 06:56:46.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.107245 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jul 2 06:56:46.108265 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 06:56:46.115086 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 06:56:46.131231 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 06:56:46.133593 ignition[724]: Ignition 2.15.0 Jul 2 06:56:46.133603 ignition[724]: Stage: fetch-offline Jul 2 06:56:46.133636 ignition[724]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:56:46.133644 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:56:46.133740 ignition[724]: parsed url from cmdline: "" Jul 2 06:56:46.133743 ignition[724]: no config URL provided Jul 2 06:56:46.133748 ignition[724]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:56:46.133757 ignition[724]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:56:46.133779 ignition[724]: op(1): [started] loading QEMU firmware config module Jul 2 06:56:46.133784 ignition[724]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 06:56:46.141004 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 06:56:46.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.142895 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:56:46.144997 ignition[724]: op(1): [finished] loading QEMU firmware config module Jul 2 06:56:46.145144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:56:46.146433 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:56:46.155299 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 06:56:46.162566 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jul 2 06:56:46.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.193690 ignition[724]: parsing config with SHA512: 083cefc5a5b88d10ba6665bc7cf4ef8a7865d71b17b4b5e2ba88cee2e71ec257bee05709c1bb1e53809575559acabcc0336098f2bb6386e669a5d8b86e8ec1e0 Jul 2 06:56:46.198416 unknown[724]: fetched base config from "system" Jul 2 06:56:46.198428 unknown[724]: fetched user config from "qemu" Jul 2 06:56:46.198847 ignition[724]: fetch-offline: fetch-offline passed Jul 2 06:56:46.198905 ignition[724]: Ignition finished successfully Jul 2 06:56:46.205614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:56:46.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.206819 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 06:56:46.218295 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 06:56:46.229685 ignition[753]: Ignition 2.15.0 Jul 2 06:56:46.229695 ignition[753]: Stage: kargs Jul 2 06:56:46.229785 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:56:46.229794 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:56:46.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.232076 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jul 2 06:56:46.230705 ignition[753]: kargs: kargs passed Jul 2 06:56:46.230741 ignition[753]: Ignition finished successfully Jul 2 06:56:46.243247 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 06:56:46.252882 ignition[761]: Ignition 2.15.0 Jul 2 06:56:46.252894 ignition[761]: Stage: disks Jul 2 06:56:46.253000 ignition[761]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:56:46.253012 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:56:46.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.255111 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 06:56:46.254170 ignition[761]: disks: disks passed Jul 2 06:56:46.256579 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 06:56:46.254217 ignition[761]: Ignition finished successfully Jul 2 06:56:46.258604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:56:46.259722 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:56:46.261631 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:56:46.261670 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:56:46.270244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 06:56:46.278505 systemd-fsck[770]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 06:56:46.311857 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 06:56:46.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:46.324251 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 2 06:56:46.397142 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 06:56:46.397441 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 06:56:46.398463 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 06:56:46.408198 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:56:46.409887 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 06:56:46.417016 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (776) Jul 2 06:56:46.412186 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 06:56:46.422345 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:56:46.422361 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:56:46.422370 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:56:46.412217 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 06:56:46.412235 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:56:46.415084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 06:56:46.415821 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 06:56:46.423486 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 06:56:46.449148 initrd-setup-root[800]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 06:56:46.452551 initrd-setup-root[807]: cut: /sysroot/etc/group: No such file or directory
Jul 2 06:56:46.454811 initrd-setup-root[814]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 06:56:46.457823 initrd-setup-root[821]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 06:56:46.511854 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 2 06:56:46.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:46.523210 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 2 06:56:46.524686 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 2 06:56:46.530143 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul 2 06:56:46.540308 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 2 06:56:46.541232 ignition[888]: INFO : Ignition 2.15.0
Jul 2 06:56:46.541232 ignition[888]: INFO : Stage: mount
Jul 2 06:56:46.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:46.543560 ignition[888]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:56:46.543560 ignition[888]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 06:56:46.543560 ignition[888]: INFO : mount: mount passed
Jul 2 06:56:46.543560 ignition[888]: INFO : Ignition finished successfully
Jul 2 06:56:46.547909 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 2 06:56:46.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:46.558266 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 2 06:56:46.950685 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 2 06:56:46.960342 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 06:56:46.966152 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (900)
Jul 2 06:56:46.966188 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul 2 06:56:46.967502 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 06:56:46.967521 kernel: BTRFS info (device vda6): using free space tree
Jul 2 06:56:46.971193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 06:56:46.993851 ignition[918]: INFO : Ignition 2.15.0
Jul 2 06:56:46.993851 ignition[918]: INFO : Stage: files
Jul 2 06:56:46.996024 ignition[918]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:56:46.996024 ignition[918]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 06:56:46.996024 ignition[918]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 06:56:46.996024 ignition[918]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 06:56:46.996024 ignition[918]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 06:56:47.012817 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 06:56:46.998631 unknown[918]: wrote ssh authorized keys file for user: core
Jul 2 06:56:47.034886 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 06:56:47.105602 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:56:47.107820 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 06:56:47.166255 systemd-networkd[722]: eth0: Gained IPv6LL
Jul 2 06:56:47.468258 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 06:56:47.835008 ignition[918]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 06:56:47.835008 ignition[918]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 2 06:56:47.838988 ignition[918]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 06:56:47.841681 ignition[918]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 06:56:47.841681 ignition[918]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 2 06:56:47.841681 ignition[918]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 2 06:56:47.846527 ignition[918]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 06:56:47.848817 ignition[918]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 06:56:47.848817 ignition[918]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 2 06:56:47.848817 ignition[918]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 2 06:56:47.853911 ignition[918]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 06:56:47.855982 ignition[918]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 2 06:56:47.855982 ignition[918]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 2 06:56:47.859556 ignition[918]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 2 06:56:47.859556 ignition[918]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 06:56:47.871598 ignition[918]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 2 06:56:47.873435 ignition[918]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 2 06:56:47.873435 ignition[918]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 06:56:47.876405 ignition[918]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 06:56:47.877844 ignition[918]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 06:56:47.879638 ignition[918]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 06:56:47.879638 ignition[918]: INFO : files: files passed
Jul 2 06:56:47.882081 ignition[918]: INFO : Ignition finished successfully
Jul 2 06:56:47.883887 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 06:56:47.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.895222 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 06:56:47.896724 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 06:56:47.900943 initrd-setup-root-after-ignition[942]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 2 06:56:47.902909 initrd-setup-root-after-ignition[945]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:56:47.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.907990 initrd-setup-root-after-ignition[945]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:56:47.902923 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 06:56:47.912155 initrd-setup-root-after-ignition[949]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 06:56:47.902988 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 06:56:47.904775 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 06:56:47.906936 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 2 06:56:47.908550 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 2 06:56:47.919992 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 2 06:56:47.920060 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 2 06:56:47.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.922086 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 2 06:56:47.924253 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 2 06:56:47.925284 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 2 06:56:47.925868 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 2 06:56:47.938078 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 06:56:47.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.939726 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 2 06:56:47.949274 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 2 06:56:47.950440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 06:56:47.952609 systemd[1]: Stopped target timers.target - Timer Units.
Jul 2 06:56:47.954640 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 2 06:56:47.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.954724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 2 06:56:47.956677 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 2 06:56:47.958492 systemd[1]: Stopped target basic.target - Basic System.
Jul 2 06:56:47.960564 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 2 06:56:47.962541 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 2 06:56:47.964420 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 06:56:47.966534 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 06:56:47.968620 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 06:56:47.970722 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 06:56:47.972674 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 06:56:47.974820 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 06:56:47.976820 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 06:56:47.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.978524 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 06:56:47.978607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 06:56:47.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.980868 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 06:56:47.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.982565 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 06:56:47.982662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 06:56:47.984604 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 06:56:47.984708 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 2 06:56:47.986720 systemd[1]: Stopped target paths.target - Path Units.
Jul 2 06:56:47.988538 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 2 06:56:47.992165 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 06:56:47.993714 systemd[1]: Stopped target slices.target - Slice Units.
Jul 2 06:56:47.995389 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 2 06:56:48.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.997310 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 2 06:56:48.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:47.997375 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 06:56:47.999698 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 06:56:47.999782 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 06:56:48.001600 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 06:56:48.001691 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 06:56:48.011287 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 06:56:48.012355 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver...
Jul 2 06:56:48.014806 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 06:56:48.015890 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 06:56:48.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.016074 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 06:56:48.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.018078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 06:56:48.018235 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 06:56:48.022036 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 06:56:48.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.025315 ignition[963]: INFO : Ignition 2.15.0
Jul 2 06:56:48.025315 ignition[963]: INFO : Stage: umount
Jul 2 06:56:48.025315 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 06:56:48.025315 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 2 06:56:48.025315 ignition[963]: INFO : umount: umount passed
Jul 2 06:56:48.025315 ignition[963]: INFO : Ignition finished successfully
Jul 2 06:56:48.022125 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver.
Jul 2 06:56:48.031864 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 06:56:48.032893 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 06:56:48.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.036262 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 06:56:48.037577 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 06:56:48.038580 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 06:56:48.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.041304 systemd[1]: Stopped target network.target - Network.
Jul 2 06:56:48.042992 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 2 06:56:48.043819 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 06:56:48.045820 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 06:56:48.046743 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 06:56:48.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.048649 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 06:56:48.048681 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 06:56:48.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.051407 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 2 06:56:48.051434 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 2 06:56:48.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.054317 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 2 06:56:48.056451 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 2 06:56:48.061154 systemd-networkd[722]: eth0: DHCPv6 lease lost
Jul 2 06:56:48.062667 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 2 06:56:48.062757 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 2 06:56:48.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.063784 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 2 06:56:48.063811 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 06:56:48.079242 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 2 06:56:48.079000 audit: BPF prog-id=9 op=UNLOAD
Jul 2 06:56:48.079329 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 2 06:56:48.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.079403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 2 06:56:48.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.081370 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 06:56:48.081405 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:56:48.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.085440 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 2 06:56:48.085477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 2 06:56:48.086476 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 06:56:48.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.090104 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 2 06:56:48.090593 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 2 06:56:48.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.090707 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 2 06:56:48.097000 audit: BPF prog-id=6 op=UNLOAD
Jul 2 06:56:48.092450 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 2 06:56:48.092494 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 06:56:48.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.097696 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 2 06:56:48.104541 kernel: kauditd_printk_skb: 55 callbacks suppressed
Jul 2 06:56:48.104565 kernel: audit: type=1131 audit(1719903408.101:66): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.098182 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 2 06:56:48.098261 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 2 06:56:48.100060 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 2 06:56:48.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.100200 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 06:56:48.119265 kernel: audit: type=1131 audit(1719903408.105:67): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.119290 kernel: audit: type=1131 audit(1719903408.114:68): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.119303 kernel: audit: type=1131 audit(1719903408.116:69): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.101543 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 2 06:56:48.125382 kernel: audit: type=1131 audit(1719903408.121:70): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.101575 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 2 06:56:48.104728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 2 06:56:48.104756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 06:56:48.105043 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 2 06:56:48.105081 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 2 06:56:48.105568 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 2 06:56:48.105599 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 2 06:56:48.114428 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 2 06:56:48.114462 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 2 06:56:48.116689 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 2 06:56:48.116722 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 2 06:56:48.137255 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 2 06:56:48.138235 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 2 06:56:48.143851 kernel: audit: type=1131 audit(1719903408.140:71): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.138283 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 06:56:48.143896 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 06:56:48.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.143932 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 06:56:48.152731 kernel: audit: type=1131 audit(1719903408.146:72): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.152749 kernel: audit: type=1131 audit(1719903408.150:73): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.146254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 2 06:56:48.157979 kernel: audit: type=1131 audit(1719903408.152:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.146287 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console.
Jul 2 06:56:48.151237 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 2 06:56:48.151637 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 2 06:56:48.151727 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 2 06:56:48.164655 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 2 06:56:48.164726 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 2 06:56:48.170307 kernel: audit: type=1130 audit(1719903408.165:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:48.165925 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 2 06:56:48.186264 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 2 06:56:48.192579 systemd[1]: Switching root.
Jul 2 06:56:48.193000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 06:56:48.193000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 06:56:48.194000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 06:56:48.194000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 06:56:48.194000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 06:56:48.212012 iscsid[730]: iscsid shutting down.
Jul 2 06:56:48.212871 systemd-journald[195]: Received SIGTERM from PID 1 (n/a).
Jul 2 06:56:48.212913 systemd-journald[195]: Journal stopped
Jul 2 06:56:49.034736 kernel: SELinux: Permission cmd in class io_uring not defined in policy.
Jul 2 06:56:49.036678 kernel: SELinux: the above unknown classes and permissions will be allowed
Jul 2 06:56:49.036709 kernel: SELinux: policy capability network_peer_controls=1
Jul 2 06:56:49.036720 kernel: SELinux: policy capability open_perms=1
Jul 2 06:56:49.036732 kernel: SELinux: policy capability extended_socket_class=1
Jul 2 06:56:49.036743 kernel: SELinux: policy capability always_check_network=0
Jul 2 06:56:49.036759 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 2 06:56:49.036770 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 2 06:56:49.036782 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 2 06:56:49.036792 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 2 06:56:49.036804 systemd[1]: Successfully loaded SELinux policy in 38.749ms.
Jul 2 06:56:49.036834 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.949ms.
Jul 2 06:56:49.036847 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 06:56:49.038186 systemd[1]: Detected virtualization kvm.
Jul 2 06:56:49.038214 systemd[1]: Detected architecture x86-64.
Jul 2 06:56:49.038226 systemd[1]: Detected first boot.
Jul 2 06:56:49.038240 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 06:56:49.038252 systemd[1]: Populated /etc with preset unit settings.
Jul 2 06:56:49.038263 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 06:56:49.038274 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 2 06:56:49.038291 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 06:56:49.038303 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 06:56:49.038315 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 06:56:49.038326 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 06:56:49.038338 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 06:56:49.038351 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 06:56:49.038363 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 06:56:49.038380 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 06:56:49.038393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 06:56:49.038405 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 06:56:49.038417 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 06:56:49.038428 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 06:56:49.038440 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 06:56:49.038453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 06:56:49.038465 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 06:56:49.038478 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 06:56:49.038491 systemd[1]: Reached target swap.target - Swaps.
Jul 2 06:56:49.038513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 06:56:49.038525 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 06:56:49.038540 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
Jul 2 06:56:49.038552 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jul 2 06:56:49.038570 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 06:56:49.038581 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 06:56:49.038593 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 06:56:49.038618 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 06:56:49.038637 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 06:56:49.038649 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 06:56:49.038661 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 06:56:49.038672 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 06:56:49.038683 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 06:56:49.038697 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:56:49.038708 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 06:56:49.038720 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 06:56:49.038731 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 06:56:49.038742 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 06:56:49.038761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 06:56:49.038772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 06:56:49.038784 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 06:56:49.038795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 06:56:49.038808 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 06:56:49.038819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 06:56:49.038831 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 06:56:49.038842 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 06:56:49.038854 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 06:56:49.038866 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 06:56:49.038888 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 06:56:49.038899 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 06:56:49.038911 kernel: fuse: init (API version 7.37)
Jul 2 06:56:49.038922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 06:56:49.038933 kernel: loop: module loaded
Jul 2 06:56:49.038944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 06:56:49.038956 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 06:56:49.038967 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 06:56:49.038979 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 06:56:49.038991 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 06:56:49.039002 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 06:56:49.039015 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 06:56:49.039029 systemd-journald[1098]: Journal started
Jul 2 06:56:49.040142 systemd-journald[1098]: Runtime Journal (/run/log/journal/09d69fbf60d1427cb3010302a6cdcb1f) is 6.0M, max 48.3M, 42.3M free.
Jul 2 06:56:48.920000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 06:56:48.920000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 2 06:56:49.033000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 06:56:49.033000 audit[1098]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fffde057960 a2=4000 a3=7fffde0579fc items=0 ppid=1 pid=1098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:56:49.033000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 06:56:49.042322 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 06:56:49.044727 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 06:56:49.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.045535 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 06:56:49.046927 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 06:56:49.048681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 06:56:49.050541 kernel: ACPI: bus type drm_connector registered
Jul 2 06:56:49.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.050999 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 06:56:49.051196 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 06:56:49.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.052975 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 06:56:49.053271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 06:56:49.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.055064 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 06:56:49.055251 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 06:56:49.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.056790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 06:56:49.056952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 06:56:49.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.058540 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 06:56:49.058722 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 06:56:49.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.060385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 06:56:49.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.062291 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 06:56:49.062460 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 06:56:49.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.064169 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 06:56:49.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.065951 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 06:56:49.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.067477 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 06:56:49.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.069102 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 06:56:49.084276 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 06:56:49.086891 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 06:56:49.088024 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 06:56:49.090056 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 06:56:49.093003 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 06:56:49.094193 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 06:56:49.095674 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed...
Jul 2 06:56:49.097713 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 06:56:49.099139 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 06:56:49.102268 systemd-journald[1098]: Time spent on flushing to /var/log/journal/09d69fbf60d1427cb3010302a6cdcb1f is 18.360ms for 1083 entries.
Jul 2 06:56:49.102268 systemd-journald[1098]: System Journal (/var/log/journal/09d69fbf60d1427cb3010302a6cdcb1f) is 8.0M, max 195.6M, 187.6M free.
Jul 2 06:56:49.128784 systemd-journald[1098]: Received client request to flush runtime journal.
Jul 2 06:56:49.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.101963 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 06:56:49.106742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 06:56:49.108034 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 06:56:49.109241 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 06:56:49.110670 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed.
Jul 2 06:56:49.112230 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 06:56:49.121263 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 06:56:49.122701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 06:56:49.124310 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 06:56:49.127028 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 06:56:49.131471 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 06:56:49.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.135556 udevadm[1149]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 2 06:56:49.145791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 06:56:49.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.712903 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 06:56:49.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.724340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 06:56:49.739735 systemd-udevd[1163]: Using default interface naming scheme 'v252'.
Jul 2 06:56:49.752717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 06:56:49.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.759223 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 06:56:49.763438 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 06:56:49.778190 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1170)
Jul 2 06:56:49.789152 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1181)
Jul 2 06:56:49.806488 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 06:56:49.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.814449 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 06:56:49.820357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 2 06:56:49.847022 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0
Jul 2 06:56:49.877795 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 2 06:56:49.877811 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 2 06:56:49.877823 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 06:56:49.877835 kernel: ACPI: button: Power Button [PWRF]
Jul 2 06:56:49.880754 systemd-networkd[1177]: lo: Link UP
Jul 2 06:56:49.881039 systemd-networkd[1177]: lo: Gained carrier
Jul 2 06:56:49.881708 systemd-networkd[1177]: Enumeration completed
Jul 2 06:56:49.881877 systemd-networkd[1177]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 06:56:49.881902 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 06:56:49.881989 systemd-networkd[1177]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 06:56:49.883158 systemd-networkd[1177]: eth0: Link UP
Jul 2 06:56:49.883221 systemd-networkd[1177]: eth0: Gained carrier
Jul 2 06:56:49.883270 systemd-networkd[1177]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 06:56:49.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:49.891524 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 06:56:49.943480 systemd-networkd[1177]: eth0: DHCPv4 address 10.0.0.85/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 2 06:56:49.966205 kernel: SVM: TSC scaling supported
Jul 2 06:56:49.966310 kernel: kvm: Nested Virtualization enabled
Jul 2 06:56:49.966346 kernel: SVM: kvm: Nested Paging enabled
Jul 2 06:56:49.967183 kernel: SVM: Virtual VMLOAD VMSAVE supported
Jul 2 06:56:49.967214 kernel: SVM: Virtual GIF supported
Jul 2 06:56:49.968158 kernel: SVM: LBR virtualization supported
Jul 2 06:56:49.984246 kernel: EDAC MC: Ver: 3.0.0
Jul 2 06:56:50.040395 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 06:56:50.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.059295 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 06:56:50.065814 lvm[1206]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 06:56:50.095289 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 06:56:50.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.109402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 06:56:50.122304 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 06:56:50.127109 lvm[1209]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 06:56:50.155412 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 06:56:50.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.156768 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 06:56:50.157917 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 06:56:50.157941 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 06:56:50.159160 systemd[1]: Reached target machines.target - Containers.
Jul 2 06:56:50.173380 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 06:56:50.175040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 06:56:50.175165 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 06:56:50.176937 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
Jul 2 06:56:50.179198 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 06:56:50.182605 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 06:56:50.185563 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 06:56:50.187373 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1212 (bootctl)
Jul 2 06:56:50.189426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM...
Jul 2 06:56:50.197462 kernel: loop0: detected capacity change from 0 to 139360
Jul 2 06:56:50.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.196368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 06:56:50.219136 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 06:56:50.237177 systemd-fsck[1220]: fsck.fat 4.2 (2021-01-31)
Jul 2 06:56:50.237177 systemd-fsck[1220]: /dev/vda1: 809 files, 120401/258078 clusters
Jul 2 06:56:50.238948 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM.
Jul 2 06:56:50.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.245319 systemd[1]: Mounting boot.mount - Boot partition...
Jul 2 06:56:50.248169 kernel: loop1: detected capacity change from 0 to 209816
Jul 2 06:56:50.384218 systemd[1]: Mounted boot.mount - Boot partition.
Jul 2 06:56:50.401830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 06:56:50.402687 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 06:56:50.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.405991 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
Jul 2 06:56:50.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.412129 kernel: loop2: detected capacity change from 0 to 80600
Jul 2 06:56:50.449139 kernel: loop3: detected capacity change from 0 to 139360
Jul 2 06:56:50.459136 kernel: loop4: detected capacity change from 0 to 209816
Jul 2 06:56:50.465139 kernel: loop5: detected capacity change from 0 to 80600
Jul 2 06:56:50.470753 (sd-sysext)[1231]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 2 06:56:50.471202 (sd-sysext)[1231]: Merged extensions into '/usr'.
Jul 2 06:56:50.473685 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 06:56:50.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.483644 systemd[1]: Starting ensure-sysext.service...
Jul 2 06:56:50.486184 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 06:56:50.495074 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 06:56:50.496069 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 06:56:50.496426 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 06:56:50.497297 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 06:56:50.498658 systemd[1]: Reloading.
Jul 2 06:56:50.539381 ldconfig[1211]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 06:56:50.626737 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:56:50.680285 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 06:56:50.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.696875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 06:56:50.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:56:50.700437 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 06:56:50.703670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 06:56:50.706632 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 06:56:50.710344 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:56:50.715355 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 06:56:50.719377 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 06:56:50.726668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:50.726885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:56:50.728434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:56:50.729000 audit[1313]: SYSTEM_BOOT pid=1313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.730855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:56:50.732969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:56:50.734032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:56:50.734149 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:56:50.734233 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:50.735047 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jul 2 06:56:50.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.736638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:56:50.736769 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:56:50.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.738458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:56:50.738641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:56:50.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.740412 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:56:50.740585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:56:50.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:50.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:50.744277 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:56:50.744478 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:56:50.749000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 06:56:50.749000 audit[1327]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff55eed6c0 a2=420 a3=0 items=0 ppid=1299 pid=1327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:50.749000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 06:56:50.750158 augenrules[1327]: No rules Jul 2 06:56:50.751480 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 06:56:50.753426 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:56:50.755096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 06:56:50.758215 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:50.758432 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:56:50.760003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 2 06:56:50.764051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:56:50.766410 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:56:50.767524 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:56:50.767638 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:56:50.767722 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:56:50.767785 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:50.768821 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 06:56:50.770510 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 06:56:50.772054 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:56:50.772336 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:56:50.773902 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:56:50.774034 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:56:50.775466 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:56:50.775612 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:56:50.777732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jul 2 06:56:50.777827 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:56:50.780046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:50.780289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:56:50.797543 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:56:51.572530 systemd-timesyncd[1310]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 06:56:51.572571 systemd-timesyncd[1310]: Initial clock synchronization to Tue 2024-07-02 06:56:51.572466 UTC. Jul 2 06:56:51.573482 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:56:51.575727 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:56:51.578226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:56:51.579456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:56:51.579557 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:56:51.579661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:56:51.579728 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:56:51.580575 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jul 2 06:56:51.582588 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:56:51.582718 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:56:51.584325 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:56:51.584551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:56:51.586038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:56:51.586165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:56:51.586551 systemd-resolved[1308]: Positive Trust Anchors: Jul 2 06:56:51.586565 systemd-resolved[1308]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:56:51.586595 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:56:51.587627 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:56:51.587755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:56:51.589428 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 06:56:51.590058 systemd-resolved[1308]: Defaulting to hostname 'linux'. Jul 2 06:56:51.590542 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:56:51.590572 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:56:51.590913 systemd[1]: Finished ensure-sysext.service. 
Jul 2 06:56:51.591927 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:56:51.593834 systemd[1]: Reached target network.target - Network. Jul 2 06:56:51.594747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:56:51.595903 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:56:51.597103 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 06:56:51.598289 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 06:56:51.599561 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 06:56:51.600718 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 06:56:51.601836 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 06:56:51.603033 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 06:56:51.603058 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:56:51.603996 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:56:51.605591 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 06:56:51.608171 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 06:56:51.610262 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 06:56:51.611465 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:56:51.624778 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 06:56:51.625999 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 2 06:56:51.626974 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:56:51.628111 systemd[1]: System is tainted: cgroupsv1 Jul 2 06:56:51.628152 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:56:51.628172 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:56:51.629513 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 06:56:51.631593 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 06:56:51.634018 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 06:56:51.636301 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 06:56:51.637216 jq[1360]: false Jul 2 06:56:51.637513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 06:56:51.638694 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 06:56:51.641170 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 06:56:51.643417 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 06:56:51.645738 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 06:56:51.648872 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 06:56:51.649933 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 2 06:56:51.652198 extend-filesystems[1361]: Found loop3 Jul 2 06:56:51.652198 extend-filesystems[1361]: Found loop4 Jul 2 06:56:51.649989 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 06:56:51.654730 extend-filesystems[1361]: Found loop5 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found sr0 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda1 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda2 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda3 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found usr Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda4 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda6 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda7 Jul 2 06:56:51.654730 extend-filesystems[1361]: Found vda9 Jul 2 06:56:51.654730 extend-filesystems[1361]: Checking size of /dev/vda9 Jul 2 06:56:51.689354 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1183) Jul 2 06:56:51.689451 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 06:56:51.651055 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 06:56:51.689568 extend-filesystems[1361]: Resized partition /dev/vda9 Jul 2 06:56:51.664949 dbus-daemon[1359]: [system] SELinux support is enabled Jul 2 06:56:51.747628 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 06:56:51.747717 update_engine[1377]: I0702 06:56:51.661880 1377 main.cc:92] Flatcar Update Engine starting Jul 2 06:56:51.747717 update_engine[1377]: I0702 06:56:51.667282 1377 update_check_scheduler.cc:74] Next update check in 6m55s Jul 2 06:56:51.655473 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 2 06:56:51.748084 extend-filesystems[1389]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 06:56:51.748084 extend-filesystems[1389]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 06:56:51.748084 extend-filesystems[1389]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 06:56:51.748084 extend-filesystems[1389]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 06:56:51.754929 jq[1380]: true Jul 2 06:56:51.667626 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 06:56:51.755869 extend-filesystems[1361]: Resized filesystem in /dev/vda9 Jul 2 06:56:51.672826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 06:56:51.757315 tar[1390]: linux-amd64/helm Jul 2 06:56:51.673049 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 06:56:51.757778 jq[1392]: true Jul 2 06:56:51.673261 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 06:56:51.673483 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 06:56:51.687706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 06:56:51.687950 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 06:56:51.709665 systemd[1]: Started update-engine.service - Update Engine. Jul 2 06:56:51.711338 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 06:56:51.711360 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 06:56:51.712816 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 2 06:56:51.712835 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 06:56:51.714934 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 06:56:51.716266 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 06:56:51.734404 systemd-logind[1375]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 06:56:51.734426 systemd-logind[1375]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 06:56:51.735540 systemd-logind[1375]: New seat seat0. Jul 2 06:56:51.736844 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 06:56:51.737133 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 06:56:51.743032 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 06:56:51.760750 locksmithd[1402]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 06:56:51.767639 bash[1411]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:56:51.768350 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 06:56:51.770825 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 06:56:51.878851 containerd[1393]: time="2024-07-02T06:56:51.878735375Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 06:56:51.908266 containerd[1393]: time="2024-07-02T06:56:51.908205602Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 06:56:51.908266 containerd[1393]: time="2024-07-02T06:56:51.908268691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 06:56:51.908524 systemd-networkd[1177]: eth0: Gained IPv6LL Jul 2 06:56:51.910720 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 06:56:51.911034 containerd[1393]: time="2024-07-02T06:56:51.910858488Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911034 containerd[1393]: time="2024-07-02T06:56:51.910884597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911189 containerd[1393]: time="2024-07-02T06:56:51.911163079Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911189 containerd[1393]: time="2024-07-02T06:56:51.911186222Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911257957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911296429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911307359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911401396Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911582806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911597403Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 06:56:51.911667 containerd[1393]: time="2024-07-02T06:56:51.911606050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911859 containerd[1393]: time="2024-07-02T06:56:51.911722568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:56:51.911859 containerd[1393]: time="2024-07-02T06:56:51.911736224Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 06:56:51.911859 containerd[1393]: time="2024-07-02T06:56:51.911776379Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 06:56:51.911859 containerd[1393]: time="2024-07-02T06:56:51.911785566Z" level=info msg="metadata content store policy set" policy=shared Jul 2 06:56:51.912445 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920047170Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920083638Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920095741Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920128182Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920143961Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920155333Z" level=info msg="NRI interface is disabled by configuration." Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920167766Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920281950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920296537Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920308670Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920321795Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920336332Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 2 06:56:51.921246 containerd[1393]: time="2024-07-02T06:56:51.920354045Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.920594 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.920366438Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921763679Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921782674Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921823791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921841264Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921856102Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.921962131Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922384513Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922419659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922438835Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922467308Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922534695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922553770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924024 containerd[1393]: time="2024-07-02T06:56:51.922569750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.923489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922586211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922602191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922618953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922634191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922648157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922666332Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922817094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922838725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922857811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922873320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922888759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922905390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922919266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 06:56:51.924512 containerd[1393]: time="2024-07-02T06:56:51.922932240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923228536Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923305580Z" level=info msg="Connect containerd service" Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923341929Z" level=info msg="using legacy CRI server" Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923350565Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923394647Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 06:56:51.924932 containerd[1393]: time="2024-07-02T06:56:51.923897340Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 06:56:51.925850 containerd[1393]: time="2024-07-02T06:56:51.925816149Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 06:56:51.925850 containerd[1393]: time="2024-07-02T06:56:51.925839873Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 06:56:51.925850 containerd[1393]: time="2024-07-02T06:56:51.925853218Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 06:56:51.926004 containerd[1393]: time="2024-07-02T06:56:51.925865080Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 06:56:51.926244 containerd[1393]: time="2024-07-02T06:56:51.926216269Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 06:56:51.926284 containerd[1393]: time="2024-07-02T06:56:51.926273316Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 06:56:51.928125 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 06:56:51.939068 containerd[1393]: time="2024-07-02T06:56:51.938981208Z" level=info msg="Start subscribing containerd event" Jul 2 06:56:51.939151 containerd[1393]: time="2024-07-02T06:56:51.939101313Z" level=info msg="Start recovering state" Jul 2 06:56:51.939217 containerd[1393]: time="2024-07-02T06:56:51.939198485Z" level=info msg="Start event monitor" Jul 2 06:56:51.939255 containerd[1393]: time="2024-07-02T06:56:51.939229804Z" level=info msg="Start snapshots syncer" Jul 2 06:56:51.939255 containerd[1393]: time="2024-07-02T06:56:51.939250893Z" level=info msg="Start cni network conf syncer for default" Jul 2 06:56:51.939291 containerd[1393]: time="2024-07-02T06:56:51.939262986Z" level=info msg="Start streaming server" Jul 2 06:56:51.939455 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 06:56:51.940455 containerd[1393]: time="2024-07-02T06:56:51.939386999Z" level=info msg="containerd successfully booted in 0.061812s" Jul 2 06:56:51.948923 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 06:56:51.951586 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jul 2 06:56:51.951855 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 06:56:51.953557 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 06:56:52.163813 tar[1390]: linux-amd64/LICENSE Jul 2 06:56:52.164018 tar[1390]: linux-amd64/README.md Jul 2 06:56:52.176283 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 06:56:52.398312 sshd_keygen[1383]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 06:56:52.421275 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 06:56:52.429736 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 06:56:52.437163 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 06:56:52.437439 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 06:56:52.446884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 06:56:52.456214 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 06:56:52.465697 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 06:56:52.468189 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 06:56:52.469758 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 06:56:52.540973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:56:52.543013 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 06:56:52.546046 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 06:56:52.553599 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 06:56:52.553826 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 06:56:52.557462 systemd[1]: Startup finished in 5.245s (kernel) + 3.526s (userspace) = 8.772s. 
Jul 2 06:56:53.042645 kubelet[1474]: E0702 06:56:53.042554 1474 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:56:53.044506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:56:53.044645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:56:57.526067 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 06:56:57.537746 systemd[1]: Started sshd@0-10.0.0.85:22-10.0.0.1:43950.service - OpenSSH per-connection server daemon (10.0.0.1:43950). Jul 2 06:56:57.568733 sshd[1485]: Accepted publickey for core from 10.0.0.1 port 43950 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:57.570408 sshd[1485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:57.578085 systemd-logind[1375]: New session 1 of user core. Jul 2 06:56:57.578876 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 06:56:57.587611 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 06:56:57.597162 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 06:56:57.598905 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 06:56:57.601787 (systemd)[1489]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:57.669674 systemd[1489]: Queued start job for default target default.target. Jul 2 06:56:57.669861 systemd[1489]: Reached target paths.target - Paths. Jul 2 06:56:57.669876 systemd[1489]: Reached target sockets.target - Sockets. Jul 2 06:56:57.669887 systemd[1489]: Reached target timers.target - Timers. 
Jul 2 06:56:57.669897 systemd[1489]: Reached target basic.target - Basic System. Jul 2 06:56:57.669933 systemd[1489]: Reached target default.target - Main User Target. Jul 2 06:56:57.669954 systemd[1489]: Startup finished in 63ms. Jul 2 06:56:57.670033 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 06:56:57.680561 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 06:56:57.737719 systemd[1]: Started sshd@1-10.0.0.85:22-10.0.0.1:43960.service - OpenSSH per-connection server daemon (10.0.0.1:43960). Jul 2 06:56:57.768205 sshd[1499]: Accepted publickey for core from 10.0.0.1 port 43960 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:57.769589 sshd[1499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:57.773329 systemd-logind[1375]: New session 2 of user core. Jul 2 06:56:57.779569 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 06:56:57.832955 sshd[1499]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:57.839687 systemd[1]: Started sshd@2-10.0.0.85:22-10.0.0.1:43972.service - OpenSSH per-connection server daemon (10.0.0.1:43972). Jul 2 06:56:57.840221 systemd[1]: sshd@1-10.0.0.85:22-10.0.0.1:43960.service: Deactivated successfully. Jul 2 06:56:57.840889 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 06:56:57.841455 systemd-logind[1375]: Session 2 logged out. Waiting for processes to exit. Jul 2 06:56:57.842224 systemd-logind[1375]: Removed session 2. Jul 2 06:56:57.864960 sshd[1505]: Accepted publickey for core from 10.0.0.1 port 43972 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:57.866546 sshd[1505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:57.870623 systemd-logind[1375]: New session 3 of user core. Jul 2 06:56:57.881656 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jul 2 06:56:57.932615 sshd[1505]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:57.944598 systemd[1]: Started sshd@3-10.0.0.85:22-10.0.0.1:43978.service - OpenSSH per-connection server daemon (10.0.0.1:43978). Jul 2 06:56:57.945094 systemd[1]: sshd@2-10.0.0.85:22-10.0.0.1:43972.service: Deactivated successfully. Jul 2 06:56:57.945599 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 06:56:57.946051 systemd-logind[1375]: Session 3 logged out. Waiting for processes to exit. Jul 2 06:56:57.946781 systemd-logind[1375]: Removed session 3. Jul 2 06:56:57.968607 sshd[1511]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:57.969710 sshd[1511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:57.973009 systemd-logind[1375]: New session 4 of user core. Jul 2 06:56:57.982551 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 06:56:58.036113 sshd[1511]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:58.044667 systemd[1]: Started sshd@4-10.0.0.85:22-10.0.0.1:43990.service - OpenSSH per-connection server daemon (10.0.0.1:43990). Jul 2 06:56:58.045171 systemd[1]: sshd@3-10.0.0.85:22-10.0.0.1:43978.service: Deactivated successfully. Jul 2 06:56:58.045782 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 06:56:58.046327 systemd-logind[1375]: Session 4 logged out. Waiting for processes to exit. Jul 2 06:56:58.047160 systemd-logind[1375]: Removed session 4. Jul 2 06:56:58.068810 sshd[1519]: Accepted publickey for core from 10.0.0.1 port 43990 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:58.069617 sshd[1519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:58.073189 systemd-logind[1375]: New session 5 of user core. Jul 2 06:56:58.086577 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 2 06:56:58.144679 sudo[1524]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 06:56:58.144880 sudo[1524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:56:58.164465 sudo[1524]: pam_unix(sudo:session): session closed for user root Jul 2 06:56:58.165676 sshd[1519]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:58.181763 systemd[1]: Started sshd@5-10.0.0.85:22-10.0.0.1:44002.service - OpenSSH per-connection server daemon (10.0.0.1:44002). Jul 2 06:56:58.182214 systemd[1]: sshd@4-10.0.0.85:22-10.0.0.1:43990.service: Deactivated successfully. Jul 2 06:56:58.183181 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 06:56:58.183195 systemd-logind[1375]: Session 5 logged out. Waiting for processes to exit. Jul 2 06:56:58.184045 systemd-logind[1375]: Removed session 5. Jul 2 06:56:58.208863 sshd[1526]: Accepted publickey for core from 10.0.0.1 port 44002 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:58.210043 sshd[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:58.213295 systemd-logind[1375]: New session 6 of user core. Jul 2 06:56:58.222542 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 06:56:58.275891 sudo[1533]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 06:56:58.276110 sudo[1533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:56:58.279311 sudo[1533]: pam_unix(sudo:session): session closed for user root Jul 2 06:56:58.284524 sudo[1532]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 06:56:58.284837 sudo[1532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:56:58.306755 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jul 2 06:56:58.306000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:56:58.308252 auditctl[1536]: No rules Jul 2 06:56:58.308582 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 06:56:58.308799 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 06:56:58.310494 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 06:56:58.329325 augenrules[1554]: No rules Jul 2 06:56:58.329926 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:56:58.330754 sudo[1532]: pam_unix(sudo:session): session closed for user root Jul 2 06:56:58.423267 sshd[1526]: pam_unix(sshd:session): session closed for user core Jul 2 06:56:58.423471 kernel: kauditd_printk_skb: 62 callbacks suppressed Jul 2 06:56:58.423502 kernel: audit: type=1305 audit(1719903418.306:134): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:56:58.306000 audit[1536]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc53713d0 a2=420 a3=0 items=0 ppid=1 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:58.426047 systemd[1]: sshd@5-10.0.0.85:22-10.0.0.1:44002.service: Deactivated successfully. Jul 2 06:56:58.427009 systemd-logind[1375]: Session 6 logged out. Waiting for processes to exit. Jul 2 06:56:58.428399 systemd[1]: Started sshd@6-10.0.0.85:22-10.0.0.1:44016.service - OpenSSH per-connection server daemon (10.0.0.1:44016). Jul 2 06:56:58.428709 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 06:56:58.429647 systemd-logind[1375]: Removed session 6. 
Jul 2 06:56:58.475816 kernel: audit: type=1300 audit(1719903418.306:134): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc53713d0 a2=420 a3=0 items=0 ppid=1 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:58.475866 kernel: audit: type=1327 audit(1719903418.306:134): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:56:58.306000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:56:58.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.479241 kernel: audit: type=1131 audit(1719903418.306:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.479348 kernel: audit: type=1130 audit(1719903418.326:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.481772 kernel: audit: type=1106 audit(1719903418.326:137): pid=1532 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:58.326000 audit[1532]: USER_END pid=1532 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.484512 kernel: audit: type=1104 audit(1719903418.326:138): pid=1532 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.326000 audit[1532]: CRED_DISP pid=1532 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.486936 kernel: audit: type=1106 audit(1719903418.422:139): pid=1526 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.422000 audit[1526]: USER_END pid=1526 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.490138 kernel: audit: type=1104 audit(1719903418.422:140): pid=1526 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.422000 audit[1526]: CRED_DISP pid=1526 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.491516 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 44016 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:56:58.492572 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:56:58.492840 kernel: audit: type=1131 audit(1719903418.424:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:44002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.85:22-10.0.0.1:44002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:58.489000 audit[1561]: USER_ACCT pid=1561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.489000 audit[1561]: CRED_ACQ pid=1561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.489000 audit[1561]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd2c21f00 a2=3 a3=7fb624b3d480 items=0 ppid=1 pid=1561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:56:58.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:56:58.496071 systemd-logind[1375]: New session 7 of user core. Jul 2 06:56:58.496864 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 06:56:58.499000 audit[1561]: USER_START pid=1561 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.500000 audit[1564]: CRED_ACQ pid=1564 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:56:58.548000 audit[1565]: USER_ACCT pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 06:56:58.550007 sudo[1565]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 06:56:58.548000 audit[1565]: CRED_REFR pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.550269 sudo[1565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:56:58.550000 audit[1565]: USER_START pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:56:58.637656 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 06:56:58.863315 dockerd[1575]: time="2024-07-02T06:56:58.863255604Z" level=info msg="Starting up" Jul 2 06:57:00.752142 dockerd[1575]: time="2024-07-02T06:57:00.752094491Z" level=info msg="Loading containers: start." 
Jul 2 06:57:00.796000 audit[1611]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.796000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd38d23c10 a2=0 a3=7f5ba9616e90 items=0 ppid=1575 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.796000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 06:57:00.797000 audit[1613]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.797000 audit[1613]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe4febdde0 a2=0 a3=7f3dadb72e90 items=0 ppid=1575 pid=1613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.797000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 06:57:00.799000 audit[1615]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.799000 audit[1615]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe3fe6e950 a2=0 a3=7f72c2836e90 items=0 ppid=1575 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.799000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 
Jul 2 06:57:00.800000 audit[1617]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.800000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff12010620 a2=0 a3=7fadec52be90 items=0 ppid=1575 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.800000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:57:00.802000 audit[1619]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.802000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcbbff0650 a2=0 a3=7f7dc79e5e90 items=0 ppid=1575 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.802000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 06:57:00.803000 audit[1621]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:00.803000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffceec04810 a2=0 a3=7fa1c870ae90 items=0 ppid=1575 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:00.803000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 06:57:01.207000 audit[1623]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.207000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff6c8a5b80 a2=0 a3=7fb5537a5e90 items=0 ppid=1575 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 06:57:01.209000 audit[1625]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.209000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffbbc53010 a2=0 a3=7f3ecbf75e90 items=0 ppid=1575 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.209000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 06:57:01.211000 audit[1627]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1627 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.211000 audit[1627]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7fff9d848970 a2=0 a3=7fa0d6e74e90 items=0 ppid=1575 pid=1627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 
06:57:01.211000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:57:01.348000 audit[1631]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1631 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.348000 audit[1631]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffccfb32630 a2=0 a3=7fdbee62de90 items=0 ppid=1575 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.348000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:57:01.349000 audit[1632]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.349000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe63d73bf0 a2=0 a3=7f60406dde90 items=0 ppid=1575 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.349000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:57:01.358398 kernel: Initializing XFRM netlink socket Jul 2 06:57:01.388000 audit[1641]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1641 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.388000 audit[1641]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffedadce170 a2=0 a3=7f529f04be90 items=0 ppid=1575 pid=1641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.388000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 06:57:01.400000 audit[1644]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1644 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.400000 audit[1644]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdd26f6c20 a2=0 a3=7f20f1630e90 items=0 ppid=1575 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 06:57:01.404000 audit[1648]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1648 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.404000 audit[1648]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff26642bd0 a2=0 a3=7fdebed77e90 items=0 ppid=1575 pid=1648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.404000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 06:57:01.406000 audit[1650]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1650 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.406000 audit[1650]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffed28be10 a2=0 
a3=7feaa89bee90 items=0 ppid=1575 pid=1650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.406000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 06:57:01.408000 audit[1652]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1652 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.408000 audit[1652]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffcb21dacb0 a2=0 a3=7ff521bd3e90 items=0 ppid=1575 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.408000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 06:57:01.410000 audit[1654]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1654 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.410000 audit[1654]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe50d81250 a2=0 a3=7f0ff2966e90 items=0 ppid=1575 pid=1654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.410000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 06:57:01.411000 audit[1656]: NETFILTER_CFG table=filter:19 
family=2 entries=1 op=nft_register_rule pid=1656 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.411000 audit[1656]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7fff9c8f10a0 a2=0 a3=7f0306cdbe90 items=0 ppid=1575 pid=1656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.411000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 06:57:01.416000 audit[1659]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1659 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.416000 audit[1659]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffe120e8200 a2=0 a3=7fbc7f496e90 items=0 ppid=1575 pid=1659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.416000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 06:57:01.418000 audit[1661]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1661 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.418000 audit[1661]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffed8ef5a80 a2=0 a3=7f830a71ce90 items=0 ppid=1575 pid=1661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.418000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 06:57:01.421000 audit[1663]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1663 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.421000 audit[1663]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffea8d21b90 a2=0 a3=7ffaee92ee90 items=0 ppid=1575 pid=1663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.421000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:57:01.424000 audit[1665]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.424000 audit[1665]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc2364c9d0 a2=0 a3=7fedc77a7e90 items=0 ppid=1575 pid=1665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.424000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 06:57:01.426158 systemd-networkd[1177]: docker0: Link UP Jul 2 06:57:01.437000 audit[1669]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1669 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.437000 audit[1669]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 
a1=7ffdc832e6b0 a2=0 a3=7f750ea9ee90 items=0 ppid=1575 pid=1669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:57:01.438000 audit[1670]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:01.438000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd172a2f90 a2=0 a3=7f1b009c3e90 items=0 ppid=1575 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:01.438000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:57:01.440671 dockerd[1575]: time="2024-07-02T06:57:01.440628408Z" level=info msg="Loading containers: done." Jul 2 06:57:01.494801 dockerd[1575]: time="2024-07-02T06:57:01.494654246Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 06:57:01.494973 dockerd[1575]: time="2024-07-02T06:57:01.494843260Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 06:57:01.494973 dockerd[1575]: time="2024-07-02T06:57:01.494932537Z" level=info msg="Daemon has completed initialization" Jul 2 06:57:01.531153 dockerd[1575]: time="2024-07-02T06:57:01.531063977Z" level=info msg="API listen on /run/docker.sock" Jul 2 06:57:01.531316 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 2 06:57:01.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:02.161346 containerd[1393]: time="2024-07-02T06:57:02.161299063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 06:57:03.295569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 06:57:03.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:03.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:03.295744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:03.311758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:57:03.407170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:03.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:03.411120 kernel: kauditd_printk_skb: 86 callbacks suppressed Jul 2 06:57:03.411261 kernel: audit: type=1130 audit(1719903423.406:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:03.490854 kubelet[1728]: E0702 06:57:03.490778 1728 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:57:03.494666 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:57:03.494817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:57:03.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:57:03.499405 kernel: audit: type=1131 audit(1719903423.493:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:57:03.729295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631164788.mount: Deactivated successfully. 
Jul 2 06:57:04.991656 containerd[1393]: time="2024-07-02T06:57:04.991552658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:04.993813 containerd[1393]: time="2024-07-02T06:57:04.993724281Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jul 2 06:57:04.995887 containerd[1393]: time="2024-07-02T06:57:04.995722629Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:04.998491 containerd[1393]: time="2024-07-02T06:57:04.998388217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:05.000325 containerd[1393]: time="2024-07-02T06:57:05.000280787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:05.001920 containerd[1393]: time="2024-07-02T06:57:05.001814924Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.840455608s" Jul 2 06:57:05.001920 containerd[1393]: time="2024-07-02T06:57:05.001915803Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 06:57:05.030218 containerd[1393]: time="2024-07-02T06:57:05.030159710Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 06:57:06.918251 containerd[1393]: time="2024-07-02T06:57:06.918187156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:06.919007 containerd[1393]: time="2024-07-02T06:57:06.918946510Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jul 2 06:57:06.920231 containerd[1393]: time="2024-07-02T06:57:06.920197696Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:06.922351 containerd[1393]: time="2024-07-02T06:57:06.922310058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:06.924135 containerd[1393]: time="2024-07-02T06:57:06.924112418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:06.926050 containerd[1393]: time="2024-07-02T06:57:06.926010768Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 1.895792848s" Jul 2 06:57:06.926102 containerd[1393]: time="2024-07-02T06:57:06.926057906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" 
Jul 2 06:57:06.948988 containerd[1393]: time="2024-07-02T06:57:06.948941952Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 06:57:08.356281 containerd[1393]: time="2024-07-02T06:57:08.356216133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:08.356870 containerd[1393]: time="2024-07-02T06:57:08.356807432Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jul 2 06:57:08.357980 containerd[1393]: time="2024-07-02T06:57:08.357953872Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:08.359734 containerd[1393]: time="2024-07-02T06:57:08.359698584Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:08.361638 containerd[1393]: time="2024-07-02T06:57:08.361606522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:08.362716 containerd[1393]: time="2024-07-02T06:57:08.362677140Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.413691957s" Jul 2 06:57:08.362774 containerd[1393]: time="2024-07-02T06:57:08.362713909Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference 
\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 06:57:08.386332 containerd[1393]: time="2024-07-02T06:57:08.386302125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 06:57:09.979580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount672065462.mount: Deactivated successfully. Jul 2 06:57:10.536825 containerd[1393]: time="2024-07-02T06:57:10.536747258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:10.537878 containerd[1393]: time="2024-07-02T06:57:10.537830679Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 06:57:10.539394 containerd[1393]: time="2024-07-02T06:57:10.539356270Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:10.541357 containerd[1393]: time="2024-07-02T06:57:10.541327617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:10.543049 containerd[1393]: time="2024-07-02T06:57:10.542997289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:10.543694 containerd[1393]: time="2024-07-02T06:57:10.543645344Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.157299947s" Jul 2 06:57:10.543694 containerd[1393]: 
time="2024-07-02T06:57:10.543686451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 06:57:10.563212 containerd[1393]: time="2024-07-02T06:57:10.563174799Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 06:57:11.663365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount292477064.mount: Deactivated successfully. Jul 2 06:57:11.675966 containerd[1393]: time="2024-07-02T06:57:11.675884917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:11.677047 containerd[1393]: time="2024-07-02T06:57:11.676975933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 06:57:11.678430 containerd[1393]: time="2024-07-02T06:57:11.678394975Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:11.727214 containerd[1393]: time="2024-07-02T06:57:11.727152883Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:11.787411 containerd[1393]: time="2024-07-02T06:57:11.787327642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:11.788199 containerd[1393]: time="2024-07-02T06:57:11.788141529Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.224735677s" 
Jul 2 06:57:11.788199 containerd[1393]: time="2024-07-02T06:57:11.788191512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 06:57:11.810277 containerd[1393]: time="2024-07-02T06:57:11.810229882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 06:57:12.740906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126182651.mount: Deactivated successfully. Jul 2 06:57:13.747563 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 06:57:13.747773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:13.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.754869 kernel: audit: type=1130 audit(1719903433.746:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.754957 kernel: audit: type=1131 audit(1719903433.746:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.757936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:57:13.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:13.894892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:13.899392 kernel: audit: type=1130 audit(1719903433.893:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:13.966989 kubelet[1893]: E0702 06:57:13.966937 1893 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:57:13.968893 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:57:13.969026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:57:13.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:57:13.973391 kernel: audit: type=1131 audit(1719903433.967:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 2 06:57:14.920953 containerd[1393]: time="2024-07-02T06:57:14.920893727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:14.921699 containerd[1393]: time="2024-07-02T06:57:14.921650076Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 06:57:14.926071 containerd[1393]: time="2024-07-02T06:57:14.926015673Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:14.927893 containerd[1393]: time="2024-07-02T06:57:14.927859480Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:14.930740 containerd[1393]: time="2024-07-02T06:57:14.930709726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:14.931813 containerd[1393]: time="2024-07-02T06:57:14.931779342Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.121497452s" Jul 2 06:57:14.931855 containerd[1393]: time="2024-07-02T06:57:14.931818575Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 06:57:14.961462 containerd[1393]: time="2024-07-02T06:57:14.961412615Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jul 2 06:57:15.562707 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount334755538.mount: Deactivated successfully.
Jul 2 06:57:16.109754 containerd[1393]: time="2024-07-02T06:57:16.109703650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:57:16.110617 containerd[1393]: time="2024-07-02T06:57:16.110550368Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jul 2 06:57:16.111698 containerd[1393]: time="2024-07-02T06:57:16.111670058Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:57:16.112985 containerd[1393]: time="2024-07-02T06:57:16.112950890Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:57:16.114433 containerd[1393]: time="2024-07-02T06:57:16.114404155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:57:16.115054 containerd[1393]: time="2024-07-02T06:57:16.115024569Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.153566028s"
Jul 2 06:57:16.115089 containerd[1393]: time="2024-07-02T06:57:16.115054986Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 06:57:19.140541 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:57:19.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.147910 kernel: audit: type=1130 audit(1719903439.139:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.147960 kernel: audit: type=1131 audit(1719903439.140:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.155770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:57:19.174625 systemd[1]: Reloading.
Jul 2 06:57:19.645312 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 06:57:19.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.742682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:57:19.743174 systemd[1]: kubelet.service: Deactivated successfully.
Jul 2 06:57:19.743478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:57:19.745540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 06:57:19.747343 kernel: audit: type=1130 audit(1719903439.740:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.747430 kernel: audit: type=1131 audit(1719903439.742:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.836993 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 06:57:19.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.848401 kernel: audit: type=1130 audit(1719903439.835:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:57:19.877569 kubelet[2073]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:57:19.877569 kubelet[2073]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 06:57:19.877569 kubelet[2073]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 06:57:19.877980 kubelet[2073]: I0702 06:57:19.877603 2073 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 06:57:20.286168 kubelet[2073]: I0702 06:57:20.286108 2073 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 06:57:20.286168 kubelet[2073]: I0702 06:57:20.286144 2073 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 06:57:20.286445 kubelet[2073]: I0702 06:57:20.286414 2073 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 06:57:20.301138 kubelet[2073]: E0702 06:57:20.301093 2073 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.302675 kubelet[2073]: I0702 06:57:20.302658 2073 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 06:57:20.322726 kubelet[2073]: I0702 06:57:20.322696 2073 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 06:57:20.325559 kubelet[2073]: I0702 06:57:20.325541 2073 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 06:57:20.325700 kubelet[2073]: I0702 06:57:20.325687 2073 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 06:57:20.325822 kubelet[2073]: I0702 06:57:20.325705 2073 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 06:57:20.325822 kubelet[2073]: I0702 06:57:20.325712 2073 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 06:57:20.329974 kubelet[2073]: I0702 06:57:20.329954 2073 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 06:57:20.332546 kubelet[2073]: I0702 06:57:20.332530 2073 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 06:57:20.332577 kubelet[2073]: I0702 06:57:20.332549 2073 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 06:57:20.332577 kubelet[2073]: I0702 06:57:20.332569 2073 kubelet.go:309] "Adding apiserver pod source"
Jul 2 06:57:20.332620 kubelet[2073]: I0702 06:57:20.332578 2073 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 06:57:20.333144 kubelet[2073]: W0702 06:57:20.333087 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.333144 kubelet[2073]: E0702 06:57:20.333132 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.335073 kubelet[2073]: W0702 06:57:20.335016 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.335122 kubelet[2073]: E0702 06:57:20.335076 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.335231 kubelet[2073]: I0702 06:57:20.335210 2073 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jul 2 06:57:20.343187 kubelet[2073]: W0702 06:57:20.343159 2073 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 06:57:20.343758 kubelet[2073]: I0702 06:57:20.343742 2073 server.go:1232] "Started kubelet"
Jul 2 06:57:20.343922 kubelet[2073]: I0702 06:57:20.343902 2073 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 06:57:20.344186 kubelet[2073]: I0702 06:57:20.344158 2073 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 06:57:20.344228 kubelet[2073]: I0702 06:57:20.344200 2073 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 06:57:20.344726 kubelet[2073]: I0702 06:57:20.344706 2073 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 06:57:20.344911 kubelet[2073]: I0702 06:57:20.344894 2073 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 06:57:20.346000 audit[2085]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.349400 kubelet[2073]: E0702 06:57:20.349273 2073 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"localhost.17de531109a8e3fa", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"localhost", UID:"localhost", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"localhost"}, FirstTimestamp:time.Date(2024, time.July, 2, 6, 57, 20, 343720954, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 6, 57, 20, 343720954, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"localhost"}': 'Post "https://10.0.0.85:6443/api/v1/namespaces/default/events": dial tcp 10.0.0.85:6443: connect: connection refused'(may retry after sleeping)
Jul 2 06:57:20.349653 kubelet[2073]: E0702 06:57:20.349633 2073 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 2 06:57:20.349689 kubelet[2073]: I0702 06:57:20.349661 2073 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 06:57:20.349737 kubelet[2073]: I0702 06:57:20.349721 2073 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 06:57:20.349788 kubelet[2073]: I0702 06:57:20.349771 2073 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 06:57:20.350028 kubelet[2073]: W0702 06:57:20.349984 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.350086 kubelet[2073]: E0702 06:57:20.350031 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.346000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff42216dd0 a2=0 a3=7f249bc22e90 items=0 ppid=2073 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.351323 kubelet[2073]: E0702 06:57:20.351289 2073 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 06:57:20.351391 kubelet[2073]: E0702 06:57:20.351331 2073 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 06:57:20.353621 kubelet[2073]: E0702 06:57:20.353607 2073 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="200ms"
Jul 2 06:57:20.355340 kernel: audit: type=1325 audit(1719903440.346:189): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.355430 kernel: audit: type=1300 audit(1719903440.346:189): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff42216dd0 a2=0 a3=7f249bc22e90 items=0 ppid=2073 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.355452 kernel: audit: type=1327 audit(1719903440.346:189): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 06:57:20.346000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 06:57:20.349000 audit[2086]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.359447 kernel: audit: type=1325 audit(1719903440.349:190): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.359499 kernel: audit: type=1300 audit(1719903440.349:190): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddc6b3c80 a2=0 a3=7efc3d810e90 items=0 ppid=2073 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.349000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddc6b3c80 a2=0 a3=7efc3d810e90 items=0 ppid=2073 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul 2 06:57:20.351000 audit[2088]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.351000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe112d5c70 a2=0 a3=7fef54f9ee90 items=0 ppid=2073 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.351000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 2 06:57:20.353000 audit[2090]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.353000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeb6efb1c0 a2=0 a3=7ffbc9e44e90 items=0 ppid=2073 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 2 06:57:20.364000 audit[2093]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.364000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcdeb61760 a2=0 a3=7f5b2b6e7e90 items=0 ppid=2073 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jul 2 06:57:20.366016 kubelet[2073]: I0702 06:57:20.365991 2073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 06:57:20.365000 audit[2094]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 06:57:20.365000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce6b33c00 a2=0 a3=7fd93ca9de90 items=0 ppid=2073 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 06:57:20.367361 kubelet[2073]: I0702 06:57:20.367233 2073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 06:57:20.367361 kubelet[2073]: I0702 06:57:20.367261 2073 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 06:57:20.367361 kubelet[2073]: I0702 06:57:20.367283 2073 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 06:57:20.367361 kubelet[2073]: E0702 06:57:20.367351 2073 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 06:57:20.367000 audit[2096]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.367000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe9ae21b40 a2=0 a3=7f15db99de90 items=0 ppid=2073 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.367000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 2 06:57:20.368000 audit[2097]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.368000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff29a5df30 a2=0 a3=7f45b194de90 items=0 ppid=2073 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.368000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 2 06:57:20.369000 audit[2098]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 06:57:20.369000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcda642cc0 a2=0 a3=7fc66b68de90 items=0 ppid=2073 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 2 06:57:20.370000 audit[2099]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 06:57:20.370000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc85b5d270 a2=0 a3=7f54624b5e90 items=0 ppid=2073 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 2 06:57:20.371000 audit[2100]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 06:57:20.371000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcaa206360 a2=0 a3=7f6c96278e90 items=0 ppid=2073 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 2 06:57:20.385000 audit[2101]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 06:57:20.385000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb7afef30 a2=0 a3=7f60b268ce90 items=0 ppid=2073 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:57:20.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 2 06:57:20.388981 kubelet[2073]: W0702 06:57:20.388608 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.388981 kubelet[2073]: E0702 06:57:20.388652 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:20.407355 kubelet[2073]: I0702 06:57:20.407326 2073 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 06:57:20.407355 kubelet[2073]: I0702 06:57:20.407346 2073 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 06:57:20.407355 kubelet[2073]: I0702 06:57:20.407361 2073 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 06:57:20.451778 kubelet[2073]: I0702 06:57:20.451737 2073 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 06:57:20.452195 kubelet[2073]: E0702 06:57:20.452156 2073 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Jul 2 06:57:20.468197 kubelet[2073]: E0702 06:57:20.468180 2073 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 06:57:20.555310 kubelet[2073]: E0702 06:57:20.555195 2073 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="400ms"
Jul 2 06:57:20.653967 kubelet[2073]: I0702 06:57:20.653933 2073 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 06:57:20.654327 kubelet[2073]: E0702 06:57:20.654305 2073 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Jul 2 06:57:20.668493 kubelet[2073]: E0702 06:57:20.668436 2073 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 2 06:57:20.956605 kubelet[2073]: E0702 06:57:20.956547 2073 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="800ms"
Jul 2 06:57:20.978725 kubelet[2073]: I0702 06:57:20.978666 2073 policy_none.go:49] "None policy: Start"
Jul 2 06:57:20.979613 kubelet[2073]: I0702 06:57:20.979586 2073 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 06:57:20.979678 kubelet[2073]: I0702 06:57:20.979617 2073 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 06:57:20.984424 kubelet[2073]: I0702 06:57:20.984398 2073 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 06:57:20.984632 kubelet[2073]: I0702 06:57:20.984617 2073 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 06:57:20.986158 kubelet[2073]: E0702 06:57:20.986122 2073 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 2 06:57:21.056608 kubelet[2073]: I0702 06:57:21.056558 2073 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 06:57:21.057065 kubelet[2073]: E0702 06:57:21.057037 2073 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Jul 2 06:57:21.069308 kubelet[2073]: I0702 06:57:21.069248 2073 topology_manager.go:215] "Topology Admit Handler" podUID="a2e51272d4546d1d1bcb9f774f6f1d0a" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jul 2 06:57:21.070652 kubelet[2073]: I0702 06:57:21.070622 2073 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jul 2 06:57:21.072038 kubelet[2073]: I0702 06:57:21.072023 2073 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jul 2 06:57:21.155289 kubelet[2073]: I0702 06:57:21.155229 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 06:57:21.155289 kubelet[2073]: I0702 06:57:21.155296 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 06:57:21.156195 kubelet[2073]: I0702 06:57:21.155331 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 06:57:21.156195 kubelet[2073]: I0702 06:57:21.155437 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 06:57:21.156195 kubelet[2073]: I0702 06:57:21.155568 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " pod="kube-system/kube-apiserver-localhost"
Jul 2 06:57:21.156195 kubelet[2073]: I0702 06:57:21.155621 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 06:57:21.156195 kubelet[2073]: I0702 06:57:21.155647 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 06:57:21.156880 kubelet[2073]: I0702 06:57:21.155669 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost"
Jul 2 06:57:21.156880 kubelet[2073]: I0702 06:57:21.155728 2073 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost"
Jul 2 06:57:21.174031 kubelet[2073]: W0702 06:57:21.173910 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.174031 kubelet[2073]: E0702 06:57:21.174016 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.85:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.375057 kubelet[2073]: E0702 06:57:21.374904 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:57:21.375239 kubelet[2073]: E0702 06:57:21.375158 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:57:21.375782 containerd[1393]: time="2024-07-02T06:57:21.375716203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2e51272d4546d1d1bcb9f774f6f1d0a,Namespace:kube-system,Attempt:0,}"
Jul 2 06:57:21.376193 containerd[1393]: time="2024-07-02T06:57:21.375729047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,}"
Jul 2 06:57:21.376250 kubelet[2073]: E0702 06:57:21.375914 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:57:21.376605 containerd[1393]: time="2024-07-02T06:57:21.376560326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,}"
Jul 2 06:57:21.758264 kubelet[2073]: E0702 06:57:21.758142 2073 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.85:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.85:6443: connect: connection refused" interval="1.6s"
Jul 2 06:57:21.835182 kubelet[2073]: W0702 06:57:21.835092 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.835182 kubelet[2073]: E0702 06:57:21.835179 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.85:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.858712 kubelet[2073]: I0702 06:57:21.858660 2073 kubelet_node_status.go:70] "Attempting to register node" node="localhost"
Jul 2 06:57:21.859051 kubelet[2073]: E0702 06:57:21.859035 2073 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.0.0.85:6443/api/v1/nodes\": dial tcp 10.0.0.85:6443: connect: connection refused" node="localhost"
Jul 2 06:57:21.893759 kubelet[2073]: W0702 06:57:21.893676 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.893759 kubelet[2073]: E0702 06:57:21.893769 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.85:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.945232 kubelet[2073]: W0702 06:57:21.945150 2073 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:21.945232 kubelet[2073]: E0702 06:57:21.945227 2073 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.85:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.85:6443: connect: connection refused
Jul 2 06:57:22.298869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139822554.mount: Deactivated successfully.
Jul 2 06:57:22.309159 containerd[1393]: time="2024-07-02T06:57:22.309089431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:57:22.309915 containerd[1393]: time="2024-07-02T06:57:22.309843035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 06:57:22.310884 containerd[1393]: time="2024-07-02T06:57:22.310853099Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:57:22.311837 containerd[1393]: time="2024-07-02T06:57:22.311762695Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 2 06:57:22.312797 containerd[1393]: time="2024-07-02T06:57:22.312761999Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 2 06:57:22.313874 containerd[1393]: time="2024-07-02T06:57:22.313838578Z" level=info msg="ImageCreate event
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.314762 containerd[1393]: time="2024-07-02T06:57:22.314698510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 06:57:22.316743 containerd[1393]: time="2024-07-02T06:57:22.316696738Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.317991 containerd[1393]: time="2024-07-02T06:57:22.317929850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.319903 containerd[1393]: time="2024-07-02T06:57:22.319850412Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.321940 containerd[1393]: time="2024-07-02T06:57:22.321898493Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.323190 containerd[1393]: time="2024-07-02T06:57:22.323132788Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 946.447056ms" Jul 2 06:57:22.323738 containerd[1393]: 
time="2024-07-02T06:57:22.323709509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 947.769486ms" Jul 2 06:57:22.324296 containerd[1393]: time="2024-07-02T06:57:22.324232991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.325635 containerd[1393]: time="2024-07-02T06:57:22.325579446Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.326255 containerd[1393]: time="2024-07-02T06:57:22.326221310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.328840 containerd[1393]: time="2024-07-02T06:57:22.328779758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:57:22.329788 containerd[1393]: time="2024-07-02T06:57:22.329751200Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 953.89311ms" Jul 2 06:57:22.437039 kubelet[2073]: E0702 06:57:22.436999 2073 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.85:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.85:6443: connect: connection refused Jul 2 06:57:22.701492 containerd[1393]: time="2024-07-02T06:57:22.701391778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:22.702056 containerd[1393]: time="2024-07-02T06:57:22.702013274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.702056 containerd[1393]: time="2024-07-02T06:57:22.702036958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:22.702056 containerd[1393]: time="2024-07-02T06:57:22.702046967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.706905 containerd[1393]: time="2024-07-02T06:57:22.706783279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:22.706905 containerd[1393]: time="2024-07-02T06:57:22.706849023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.706905 containerd[1393]: time="2024-07-02T06:57:22.706872326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:22.706905 containerd[1393]: time="2024-07-02T06:57:22.706892143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.711761 containerd[1393]: time="2024-07-02T06:57:22.710911351Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:22.711761 containerd[1393]: time="2024-07-02T06:57:22.710975491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.711761 containerd[1393]: time="2024-07-02T06:57:22.710994867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:22.711761 containerd[1393]: time="2024-07-02T06:57:22.711008192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:22.788971 containerd[1393]: time="2024-07-02T06:57:22.788925305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d27baad490d2d4f748c86b318d7d74ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d6a4a93ca5471ec7b79e258762471dc7ed383634eb8ecbde6577c2720be3764\"" Jul 2 06:57:22.789989 containerd[1393]: time="2024-07-02T06:57:22.789953573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a2e51272d4546d1d1bcb9f774f6f1d0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc012fd0d7e0171c08e123a10b2293ac580f58d661ed5d47df53c4872c767417\"" Jul 2 06:57:22.790794 kubelet[2073]: E0702 06:57:22.790771 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:22.791590 kubelet[2073]: E0702 06:57:22.791568 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:22.792949 containerd[1393]: time="2024-07-02T06:57:22.792923403Z" level=info msg="CreateContainer within sandbox \"6d6a4a93ca5471ec7b79e258762471dc7ed383634eb8ecbde6577c2720be3764\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 06:57:22.794198 containerd[1393]: time="2024-07-02T06:57:22.794159601Z" level=info msg="CreateContainer within sandbox \"fc012fd0d7e0171c08e123a10b2293ac580f58d661ed5d47df53c4872c767417\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 06:57:22.795022 containerd[1393]: time="2024-07-02T06:57:22.794998635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:9c3207d669e00aa24ded52617c0d65d0,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"94411432fc54679cdfe1370b63313ec4a331fb9efbb0d37be5289b372b95adf8\"" Jul 2 06:57:22.795707 kubelet[2073]: E0702 06:57:22.795674 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:22.797823 containerd[1393]: time="2024-07-02T06:57:22.797802994Z" level=info msg="CreateContainer within sandbox \"94411432fc54679cdfe1370b63313ec4a331fb9efbb0d37be5289b372b95adf8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 06:57:22.813128 containerd[1393]: time="2024-07-02T06:57:22.813084532Z" level=info msg="CreateContainer within sandbox \"6d6a4a93ca5471ec7b79e258762471dc7ed383634eb8ecbde6577c2720be3764\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7151378f8b991a75d363a797bf90014455cecab0c16fad2a820a4532c31869df\"" Jul 2 06:57:22.813697 containerd[1393]: time="2024-07-02T06:57:22.813671833Z" level=info msg="StartContainer for \"7151378f8b991a75d363a797bf90014455cecab0c16fad2a820a4532c31869df\"" Jul 2 06:57:22.819249 containerd[1393]: time="2024-07-02T06:57:22.819207415Z" level=info msg="CreateContainer within sandbox \"fc012fd0d7e0171c08e123a10b2293ac580f58d661ed5d47df53c4872c767417\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0008e247c75fbb0757f2b1df79bf6efca36982166f3ea8a439fa11c215c3bf31\"" Jul 2 06:57:22.819717 containerd[1393]: time="2024-07-02T06:57:22.819684018Z" level=info msg="StartContainer for \"0008e247c75fbb0757f2b1df79bf6efca36982166f3ea8a439fa11c215c3bf31\"" Jul 2 06:57:22.821854 containerd[1393]: time="2024-07-02T06:57:22.821816017Z" level=info msg="CreateContainer within sandbox \"94411432fc54679cdfe1370b63313ec4a331fb9efbb0d37be5289b372b95adf8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37ba2af1fc28c33fe42ac84f75c3320e08735f1e4c7385b75841dae20b2a0a46\"" Jul 2 06:57:22.822209 
containerd[1393]: time="2024-07-02T06:57:22.822181963Z" level=info msg="StartContainer for \"37ba2af1fc28c33fe42ac84f75c3320e08735f1e4c7385b75841dae20b2a0a46\"" Jul 2 06:57:22.893738 containerd[1393]: time="2024-07-02T06:57:22.892785029Z" level=info msg="StartContainer for \"7151378f8b991a75d363a797bf90014455cecab0c16fad2a820a4532c31869df\" returns successfully" Jul 2 06:57:22.908819 containerd[1393]: time="2024-07-02T06:57:22.908033976Z" level=info msg="StartContainer for \"0008e247c75fbb0757f2b1df79bf6efca36982166f3ea8a439fa11c215c3bf31\" returns successfully" Jul 2 06:57:22.921658 containerd[1393]: time="2024-07-02T06:57:22.921296789Z" level=info msg="StartContainer for \"37ba2af1fc28c33fe42ac84f75c3320e08735f1e4c7385b75841dae20b2a0a46\" returns successfully" Jul 2 06:57:23.396729 kubelet[2073]: E0702 06:57:23.396687 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:23.398777 kubelet[2073]: E0702 06:57:23.398749 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:23.400558 kubelet[2073]: E0702 06:57:23.400532 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:23.460338 kubelet[2073]: I0702 06:57:23.460300 2073 kubelet_node_status.go:70] "Attempting to register node" node="localhost" Jul 2 06:57:24.246776 kubelet[2073]: I0702 06:57:24.246737 2073 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 06:57:24.335724 kubelet[2073]: I0702 06:57:24.335674 2073 apiserver.go:52] "Watching apiserver" Jul 2 06:57:24.350802 kubelet[2073]: I0702 06:57:24.350777 2073 desired_state_of_world_populator.go:159] "Finished populating initial 
desired state of world" Jul 2 06:57:24.845784 kubelet[2073]: E0702 06:57:24.845462 2073 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 2 06:57:24.852253 kubelet[2073]: E0702 06:57:24.852226 2073 kubelet.go:1890] "Failed creating a mirror pod for" err="namespaces \"kube-system\" not found" pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:24.852649 kubelet[2073]: E0702 06:57:24.852630 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:25.287009 kubelet[2073]: E0702 06:57:25.286968 2073 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 06:57:25.287483 kubelet[2073]: E0702 06:57:25.287466 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:25.287757 kubelet[2073]: E0702 06:57:25.287717 2073 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 2 06:57:25.288048 kubelet[2073]: E0702 06:57:25.288029 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:28.180539 systemd[1]: Reloading. 
Jul 2 06:57:29.018918 kubelet[2073]: E0702 06:57:29.018892 2073 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:29.251938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 06:57:29.343824 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:57:29.344078 kubelet[2073]: I0702 06:57:29.343826 2073 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:57:29.352675 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:57:29.353012 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:29.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:29.353823 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 2 06:57:29.353889 kernel: audit: type=1131 audit(1719903449.351:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:29.360754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:57:29.455253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:57:29.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:29.462775 kernel: audit: type=1130 audit(1719903449.454:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:29.506615 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:57:29.506615 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:57:29.506615 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:57:29.508878 kubelet[2418]: I0702 06:57:29.508745 2418 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:57:29.514294 kubelet[2418]: I0702 06:57:29.514259 2418 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 06:57:29.514294 kubelet[2418]: I0702 06:57:29.514289 2418 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:57:29.514631 kubelet[2418]: I0702 06:57:29.514586 2418 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 06:57:29.516029 kubelet[2418]: I0702 06:57:29.515962 2418 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 06:57:29.516871 kubelet[2418]: I0702 06:57:29.516848 2418 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:57:29.523010 kubelet[2418]: I0702 06:57:29.522985 2418 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 06:57:29.523653 kubelet[2418]: I0702 06:57:29.523632 2418 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:57:29.523806 kubelet[2418]: I0702 06:57:29.523788 2418 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l} Jul 2 06:57:29.523881 kubelet[2418]: I0702 06:57:29.523809 2418 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:57:29.523881 kubelet[2418]: I0702 06:57:29.523820 2418 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:57:29.523881 kubelet[2418]: I0702 06:57:29.523848 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:57:29.523975 kubelet[2418]: I0702 06:57:29.523924 2418 kubelet.go:393] "Attempting to sync node with API server" Jul 2 06:57:29.523975 kubelet[2418]: I0702 06:57:29.523939 2418 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:57:29.523975 kubelet[2418]: I0702 06:57:29.523964 2418 kubelet.go:309] "Adding apiserver pod source" Jul 2 06:57:29.523975 kubelet[2418]: I0702 06:57:29.523978 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:57:29.526570 kubelet[2418]: I0702 06:57:29.526543 2418 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:57:29.527270 kubelet[2418]: I0702 06:57:29.527224 2418 server.go:1232] "Started kubelet" Jul 2 06:57:29.529531 kubelet[2418]: I0702 06:57:29.529513 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:57:29.531717 kubelet[2418]: I0702 06:57:29.531696 2418 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:57:29.532711 kubelet[2418]: I0702 06:57:29.532689 2418 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 06:57:29.532899 kubelet[2418]: E0702 06:57:29.532886 2418 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 06:57:29.532972 kubelet[2418]: E0702 06:57:29.532963 2418 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:57:29.533046 kubelet[2418]: I0702 06:57:29.532984 2418 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:57:29.533582 kubelet[2418]: I0702 06:57:29.533550 2418 server.go:462] "Adding debug handlers to kubelet server" Jul 2 06:57:29.535071 kubelet[2418]: I0702 06:57:29.535060 2418 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:57:29.536030 kubelet[2418]: I0702 06:57:29.536004 2418 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 06:57:29.536227 kubelet[2418]: I0702 06:57:29.536217 2418 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 06:57:29.546690 kubelet[2418]: I0702 06:57:29.546513 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:57:29.547790 kubelet[2418]: I0702 06:57:29.547771 2418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 06:57:29.547790 kubelet[2418]: I0702 06:57:29.547790 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:57:29.547855 kubelet[2418]: I0702 06:57:29.547807 2418 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 06:57:29.547855 kubelet[2418]: E0702 06:57:29.547849 2418 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:57:29.600493 kubelet[2418]: I0702 06:57:29.600312 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:57:29.600493 kubelet[2418]: I0702 06:57:29.600335 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:57:29.600493 kubelet[2418]: I0702 06:57:29.600349 2418 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:57:29.600660 kubelet[2418]: I0702 06:57:29.600504 2418 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 06:57:29.600660 kubelet[2418]: I0702 06:57:29.600523 2418 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 06:57:29.600660 kubelet[2418]: I0702 06:57:29.600529 2418 policy_none.go:49] "None policy: Start" Jul 2 06:57:29.601251 kubelet[2418]: I0702 06:57:29.601230 2418 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 06:57:29.601300 kubelet[2418]: I0702 06:57:29.601282 2418 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:57:29.601501 kubelet[2418]: I0702 06:57:29.601489 2418 state_mem.go:75] "Updated machine memory state" Jul 2 06:57:29.602576 kubelet[2418]: I0702 06:57:29.602557 2418 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:57:29.602779 kubelet[2418]: I0702 06:57:29.602760 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:57:29.639303 kubelet[2418]: I0702 06:57:29.639269 2418 kubelet_node_status.go:70] "Attempting to register node" 
node="localhost" Jul 2 06:57:29.648038 kubelet[2418]: I0702 06:57:29.647990 2418 topology_manager.go:215] "Topology Admit Handler" podUID="a2e51272d4546d1d1bcb9f774f6f1d0a" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 06:57:29.648180 kubelet[2418]: I0702 06:57:29.648126 2418 topology_manager.go:215] "Topology Admit Handler" podUID="d27baad490d2d4f748c86b318d7d74ef" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 06:57:29.648180 kubelet[2418]: I0702 06:57:29.648168 2418 topology_manager.go:215] "Topology Admit Handler" podUID="9c3207d669e00aa24ded52617c0d65d0" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 06:57:29.655764 kubelet[2418]: I0702 06:57:29.655577 2418 kubelet_node_status.go:108] "Node was previously registered" node="localhost" Jul 2 06:57:29.655764 kubelet[2418]: I0702 06:57:29.655661 2418 kubelet_node_status.go:73] "Successfully registered node" node="localhost" Jul 2 06:57:29.656424 kubelet[2418]: E0702 06:57:29.656411 2418 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 06:57:29.739448 kubelet[2418]: I0702 06:57:29.739399 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:29.739448 kubelet[2418]: I0702 06:57:29.739448 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:29.739626 kubelet[2418]: I0702 06:57:29.739473 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c3207d669e00aa24ded52617c0d65d0-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"9c3207d669e00aa24ded52617c0d65d0\") " pod="kube-system/kube-scheduler-localhost" Jul 2 06:57:29.739626 kubelet[2418]: I0702 06:57:29.739548 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:29.739626 kubelet[2418]: I0702 06:57:29.739595 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:29.739626 kubelet[2418]: I0702 06:57:29.739621 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d27baad490d2d4f748c86b318d7d74ef-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d27baad490d2d4f748c86b318d7d74ef\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:57:29.739718 kubelet[2418]: I0702 06:57:29.739645 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 06:57:29.739718 kubelet[2418]: I0702 06:57:29.739675 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:57:29.739718 kubelet[2418]: I0702 06:57:29.739701 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a2e51272d4546d1d1bcb9f774f6f1d0a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a2e51272d4546d1d1bcb9f774f6f1d0a\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:57:29.957652 kubelet[2418]: E0702 06:57:29.957609 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:29.957834 kubelet[2418]: E0702 06:57:29.957728 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:29.958057 kubelet[2418]: E0702 06:57:29.958038 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:30.524610 kubelet[2418]: I0702 06:57:30.524542 2418 apiserver.go:52] "Watching apiserver" Jul 2 06:57:30.536761 kubelet[2418]: I0702 06:57:30.536700 2418 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 06:57:30.563276 kubelet[2418]: E0702 06:57:30.563242 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:30.563498 kubelet[2418]: E0702 06:57:30.563486 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:30.563870 kubelet[2418]: E0702 06:57:30.563850 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:30.599009 kubelet[2418]: I0702 06:57:30.598954 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.598902209 podCreationTimestamp="2024-07-02 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:30.598339083 +0000 UTC m=+1.136557150" watchObservedRunningTime="2024-07-02 06:57:30.598902209 +0000 UTC m=+1.137120266" Jul 2 06:57:30.631106 kubelet[2418]: I0702 06:57:30.631060 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.630999691 podCreationTimestamp="2024-07-02 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:30.615568209 +0000 UTC m=+1.153786256" watchObservedRunningTime="2024-07-02 06:57:30.630999691 +0000 UTC m=+1.169217758" Jul 2 06:57:30.631331 kubelet[2418]: I0702 06:57:30.631224 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.631159125 podCreationTimestamp="2024-07-02 06:57:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:30.630096605 +0000 UTC 
m=+1.168314672" watchObservedRunningTime="2024-07-02 06:57:30.631159125 +0000 UTC m=+1.169377182" Jul 2 06:57:31.563993 kubelet[2418]: E0702 06:57:31.563953 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:31.710282 kubelet[2418]: E0702 06:57:31.710250 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:34.973135 sudo[1565]: pam_unix(sudo:session): session closed for user root Jul 2 06:57:34.971000 audit[1565]: USER_END pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:57:34.971000 audit[1565]: CRED_DISP pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:57:34.980786 kernel: audit: type=1106 audit(1719903454.971:203): pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:57:34.980849 kernel: audit: type=1104 audit(1719903454.971:204): pid=1565 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:34.987749 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:34.989000 audit[1561]: USER_END pid=1561 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:34.993362 systemd[1]: sshd@6-10.0.0.85:22-10.0.0.1:44016.service: Deactivated successfully. Jul 2 06:57:34.994685 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 06:57:34.994784 systemd-logind[1375]: Session 7 logged out. Waiting for processes to exit. Jul 2 06:57:34.996011 systemd-logind[1375]: Removed session 7. Jul 2 06:57:34.989000 audit[1561]: CRED_DISP pid=1561 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:35.019871 kernel: audit: type=1106 audit(1719903454.989:205): pid=1561 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:35.019924 kernel: audit: type=1104 audit(1719903454.989:206): pid=1561 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:35.019941 kernel: audit: type=1131 audit(1719903454.989:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:34.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.85:22-10.0.0.1:44016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:36.059191 kubelet[2418]: E0702 06:57:36.058800 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:36.345433 kubelet[2418]: E0702 06:57:36.345318 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:36.588251 kubelet[2418]: E0702 06:57:36.588204 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:36.592811 kubelet[2418]: E0702 06:57:36.589013 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:36.789975 update_engine[1377]: I0702 06:57:36.787472 1377 update_attempter.cc:509] Updating boot flags... 
Jul 2 06:57:36.871490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2509) Jul 2 06:57:36.903412 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2511) Jul 2 06:57:36.937407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2511) Jul 2 06:57:37.589264 kubelet[2418]: E0702 06:57:37.589176 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:40.720801 kubelet[2418]: I0702 06:57:40.720763 2418 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 06:57:40.721201 containerd[1393]: time="2024-07-02T06:57:40.721156181Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 06:57:40.721461 kubelet[2418]: I0702 06:57:40.721393 2418 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 06:57:41.262267 kubelet[2418]: I0702 06:57:41.262219 2418 topology_manager.go:215] "Topology Admit Handler" podUID="450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12" podNamespace="kube-system" podName="kube-proxy-jqlzh" Jul 2 06:57:41.427977 kubelet[2418]: I0702 06:57:41.427925 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r47kx\" (UniqueName: \"kubernetes.io/projected/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-kube-api-access-r47kx\") pod \"kube-proxy-jqlzh\" (UID: \"450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12\") " pod="kube-system/kube-proxy-jqlzh" Jul 2 06:57:41.427977 kubelet[2418]: I0702 06:57:41.427976 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-kube-proxy\") pod 
\"kube-proxy-jqlzh\" (UID: \"450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12\") " pod="kube-system/kube-proxy-jqlzh" Jul 2 06:57:41.427977 kubelet[2418]: I0702 06:57:41.427995 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-xtables-lock\") pod \"kube-proxy-jqlzh\" (UID: \"450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12\") " pod="kube-system/kube-proxy-jqlzh" Jul 2 06:57:41.428214 kubelet[2418]: I0702 06:57:41.428033 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-lib-modules\") pod \"kube-proxy-jqlzh\" (UID: \"450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12\") " pod="kube-system/kube-proxy-jqlzh" Jul 2 06:57:41.673894 kubelet[2418]: E0702 06:57:41.673845 2418 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 2 06:57:41.673894 kubelet[2418]: E0702 06:57:41.673885 2418 projected.go:198] Error preparing data for projected volume kube-api-access-r47kx for pod kube-system/kube-proxy-jqlzh: configmap "kube-root-ca.crt" not found Jul 2 06:57:41.674128 kubelet[2418]: E0702 06:57:41.673954 2418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-kube-api-access-r47kx podName:450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12 nodeName:}" failed. No retries permitted until 2024-07-02 06:57:42.173928324 +0000 UTC m=+12.712146381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r47kx" (UniqueName: "kubernetes.io/projected/450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12-kube-api-access-r47kx") pod "kube-proxy-jqlzh" (UID: "450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12") : configmap "kube-root-ca.crt" not found Jul 2 06:57:41.713961 kubelet[2418]: E0702 06:57:41.713938 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:42.438931 kubelet[2418]: I0702 06:57:42.438870 2418 topology_manager.go:215] "Topology Admit Handler" podUID="e11a549e-cf61-42a8-960c-b063630233da" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-hnb8q" Jul 2 06:57:42.466153 kubelet[2418]: E0702 06:57:42.466116 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:42.466586 containerd[1393]: time="2024-07-02T06:57:42.466549416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqlzh,Uid:450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12,Namespace:kube-system,Attempt:0,}" Jul 2 06:57:42.635397 kubelet[2418]: I0702 06:57:42.635316 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr78w\" (UniqueName: \"kubernetes.io/projected/e11a549e-cf61-42a8-960c-b063630233da-kube-api-access-cr78w\") pod \"tigera-operator-76c4974c85-hnb8q\" (UID: \"e11a549e-cf61-42a8-960c-b063630233da\") " pod="tigera-operator/tigera-operator-76c4974c85-hnb8q" Jul 2 06:57:42.635397 kubelet[2418]: I0702 06:57:42.635362 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e11a549e-cf61-42a8-960c-b063630233da-var-lib-calico\") pod \"tigera-operator-76c4974c85-hnb8q\" (UID: 
\"e11a549e-cf61-42a8-960c-b063630233da\") " pod="tigera-operator/tigera-operator-76c4974c85-hnb8q" Jul 2 06:57:42.918164 containerd[1393]: time="2024-07-02T06:57:42.918050681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:42.918164 containerd[1393]: time="2024-07-02T06:57:42.918115063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:42.918164 containerd[1393]: time="2024-07-02T06:57:42.918132756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:42.918164 containerd[1393]: time="2024-07-02T06:57:42.918146081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:42.952283 containerd[1393]: time="2024-07-02T06:57:42.952225434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jqlzh,Uid:450d61a9-b4e1-4a21-bdbd-ab7c88e7ce12,Namespace:kube-system,Attempt:0,} returns sandbox id \"102f7589333d81737ab94e3603384b2052adf466ec6da0352e16a7c9ba2f4307\"" Jul 2 06:57:42.952831 kubelet[2418]: E0702 06:57:42.952805 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:42.954711 containerd[1393]: time="2024-07-02T06:57:42.954674126Z" level=info msg="CreateContainer within sandbox \"102f7589333d81737ab94e3603384b2052adf466ec6da0352e16a7c9ba2f4307\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 06:57:43.043180 containerd[1393]: time="2024-07-02T06:57:43.043111486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-hnb8q,Uid:e11a549e-cf61-42a8-960c-b063630233da,Namespace:tigera-operator,Attempt:0,}" Jul 2 
06:57:43.879805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9209118.mount: Deactivated successfully. Jul 2 06:57:44.501185 containerd[1393]: time="2024-07-02T06:57:44.501136497Z" level=info msg="CreateContainer within sandbox \"102f7589333d81737ab94e3603384b2052adf466ec6da0352e16a7c9ba2f4307\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fe04691dbb53112b1e476d7308f805dcd6c645561afcb82eb79eea41a8fcbdf2\"" Jul 2 06:57:44.504408 containerd[1393]: time="2024-07-02T06:57:44.504348616Z" level=info msg="StartContainer for \"fe04691dbb53112b1e476d7308f805dcd6c645561afcb82eb79eea41a8fcbdf2\"" Jul 2 06:57:44.511946 containerd[1393]: time="2024-07-02T06:57:44.511877774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:44.512116 containerd[1393]: time="2024-07-02T06:57:44.511956793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:44.512116 containerd[1393]: time="2024-07-02T06:57:44.512002400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:44.512116 containerd[1393]: time="2024-07-02T06:57:44.512030462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:44.593000 audit[2665]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.593000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb7000e40 a2=0 a3=7ffcb7000e2c items=0 ppid=2608 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.603456 kernel: audit: type=1325 audit(1719903464.593:208): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.603575 kernel: audit: type=1300 audit(1719903464.593:208): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb7000e40 a2=0 a3=7ffcb7000e2c items=0 ppid=2608 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:57:44.605348 kernel: audit: type=1327 audit(1719903464.593:208): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:57:44.605406 kernel: audit: type=1325 audit(1719903464.596:209): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.596000 audit[2664]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.596000 audit[2664]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc0771d40 a2=0 a3=7fffc0771d2c 
items=0 ppid=2608 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.610618 kernel: audit: type=1300 audit(1719903464.596:209): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc0771d40 a2=0 a3=7fffc0771d2c items=0 ppid=2608 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.610657 kernel: audit: type=1327 audit(1719903464.596:209): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:57:44.596000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:57:44.612301 kernel: audit: type=1325 audit(1719903464.597:210): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.597000 audit[2669]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.613987 kernel: audit: type=1300 audit(1719903464.597:210): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc411aae30 a2=0 a3=7ffc411aae1c items=0 ppid=2608 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.597000 audit[2669]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc411aae30 a2=0 a3=7ffc411aae1c items=0 ppid=2608 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.617576 kernel: audit: type=1327 audit(1719903464.597:210): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:57:44.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:57:44.619293 kernel: audit: type=1325 audit(1719903464.598:211): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.598000 audit[2670]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.598000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe749991a0 a2=0 a3=7ffe7499918c items=0 ppid=2608 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.598000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:57:44.602000 audit[2668]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.602000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff815e8e90 a2=0 a3=7fff815e8e7c items=0 ppid=2608 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:57:44.603000 audit[2671]: 
NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2671 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.603000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd38513de0 a2=0 a3=7ffd38513dcc items=0 ppid=2608 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.603000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:57:44.663797 containerd[1393]: time="2024-07-02T06:57:44.663715073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-hnb8q,Uid:e11a549e-cf61-42a8-960c-b063630233da,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"850452480fe3c61a38c7836df0223e266fc47ccad381e27722f95612c6051726\"" Jul 2 06:57:44.663797 containerd[1393]: time="2024-07-02T06:57:44.663750049Z" level=info msg="StartContainer for \"fe04691dbb53112b1e476d7308f805dcd6c645561afcb82eb79eea41a8fcbdf2\" returns successfully" Jul 2 06:57:44.665861 containerd[1393]: time="2024-07-02T06:57:44.665834868Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 06:57:44.698000 audit[2672]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.698000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc0d7ddf30 a2=0 a3=7ffc0d7ddf1c items=0 ppid=2608 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.698000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:57:44.701000 audit[2674]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.701000 audit[2674]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcb5956110 a2=0 a3=7ffcb59560fc items=0 ppid=2608 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.701000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 06:57:44.704000 audit[2677]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.704000 audit[2677]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff266e3da0 a2=0 a3=7fff266e3d8c items=0 ppid=2608 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.704000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 06:57:44.705000 audit[2678]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2678 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.705000 audit[2678]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=100 a0=3 a1=7fff18626440 a2=0 a3=7fff1862642c items=0 ppid=2608 pid=2678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.705000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:57:44.707000 audit[2680]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.707000 audit[2680]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd480e4300 a2=0 a3=7ffd480e42ec items=0 ppid=2608 pid=2680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.707000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:57:44.708000 audit[2681]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.708000 audit[2681]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87addf30 a2=0 a3=7ffd87addf1c items=0 ppid=2608 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:57:44.711000 audit[2683]: NETFILTER_CFG table=filter:50 family=2 
entries=1 op=nft_register_rule pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.711000 audit[2683]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdde0e92f0 a2=0 a3=7ffdde0e92dc items=0 ppid=2608 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:57:44.714000 audit[2686]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2686 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.714000 audit[2686]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe583d6280 a2=0 a3=7ffe583d626c items=0 ppid=2608 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 06:57:44.715000 audit[2687]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.715000 audit[2687]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc46302a0 a2=0 a3=7fffc463028c items=0 ppid=2608 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.715000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:57:44.718000 audit[2689]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.718000 audit[2689]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd90b72de0 a2=0 a3=7ffd90b72dcc items=0 ppid=2608 pid=2689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:57:44.719000 audit[2690]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.719000 audit[2690]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdc27afc10 a2=0 a3=7ffdc27afbfc items=0 ppid=2608 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:57:44.721000 audit[2692]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2692 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.721000 audit[2692]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe761f9fc0 a2=0 
a3=7ffe761f9fac items=0 ppid=2608 pid=2692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.721000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:57:44.724000 audit[2695]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2695 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.724000 audit[2695]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6810a370 a2=0 a3=7ffc6810a35c items=0 ppid=2608 pid=2695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.724000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:57:44.728000 audit[2698]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.728000 audit[2698]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff6d322060 a2=0 a3=7fff6d32204c items=0 ppid=2608 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.728000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:57:44.729000 audit[2699]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.729000 audit[2699]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc41f51d50 a2=0 a3=7ffc41f51d3c items=0 ppid=2608 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:57:44.731000 audit[2701]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2701 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.731000 audit[2701]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdba802880 a2=0 a3=7ffdba80286c items=0 ppid=2608 pid=2701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:57:44.735000 audit[2704]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.735000 audit[2704]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd756cbce0 a2=0 a3=7ffd756cbccc items=0 
ppid=2608 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:57:44.736000 audit[2705]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.736000 audit[2705]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd11787a50 a2=0 a3=7ffd11787a3c items=0 ppid=2608 pid=2705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:57:44.739000 audit[2707]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:57:44.739000 audit[2707]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffecf4cc40 a2=0 a3=7fffecf4cc2c items=0 ppid=2608 pid=2707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:57:44.754000 audit[2713]: NETFILTER_CFG 
table=filter:63 family=2 entries=8 op=nft_register_rule pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:44.754000 audit[2713]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe5c773a20 a2=0 a3=7ffe5c773a0c items=0 ppid=2608 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:44.762000 audit[2713]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:44.762000 audit[2713]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffe5c773a20 a2=0 a3=7ffe5c773a0c items=0 ppid=2608 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:44.763000 audit[2719]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.763000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff17201650 a2=0 a3=7fff1720163c items=0 ppid=2608 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.763000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:57:44.766000 audit[2721]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.766000 audit[2721]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe9909c920 a2=0 a3=7ffe9909c90c items=0 ppid=2608 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.766000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 06:57:44.770000 audit[2724]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2724 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.770000 audit[2724]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe2ca541b0 a2=0 a3=7ffe2ca5419c items=0 ppid=2608 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 06:57:44.771000 audit[2725]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.771000 audit[2725]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec5f3c270 a2=0 a3=7ffec5f3c25c items=0 ppid=2608 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:57:44.773000 audit[2727]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.773000 audit[2727]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc979d4940 a2=0 a3=7ffc979d492c items=0 ppid=2608 pid=2727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:57:44.774000 audit[2728]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2728 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.774000 audit[2728]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca3498b70 a2=0 a3=7ffca3498b5c items=0 ppid=2608 pid=2728 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.774000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:57:44.777000 audit[2730]: NETFILTER_CFG 
table=filter:71 family=10 entries=1 op=nft_register_rule pid=2730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.777000 audit[2730]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0c24c0f0 a2=0 a3=7ffe0c24c0dc items=0 ppid=2608 pid=2730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 06:57:44.780000 audit[2733]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.780000 audit[2733]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff22ca7f60 a2=0 a3=7fff22ca7f4c items=0 ppid=2608 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:57:44.781000 audit[2734]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.781000 audit[2734]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffecf22310 a2=0 a3=7fffecf222fc items=0 ppid=2608 pid=2734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.781000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:57:44.784000 audit[2736]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.784000 audit[2736]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff69f9dcb0 a2=0 a3=7fff69f9dc9c items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:57:44.785000 audit[2737]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2737 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.785000 audit[2737]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1ac2a2b0 a2=0 a3=7ffd1ac2a29c items=0 ppid=2608 pid=2737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:57:44.788000 audit[2739]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.788000 audit[2739]: SYSCALL arch=c000003e syscall=46 
success=yes exit=748 a0=3 a1=7ffe04a248a0 a2=0 a3=7ffe04a2488c items=0 ppid=2608 pid=2739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:57:44.791000 audit[2742]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.791000 audit[2742]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe4ece4440 a2=0 a3=7ffe4ece442c items=0 ppid=2608 pid=2742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.791000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:57:44.795000 audit[2745]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.795000 audit[2745]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdcc67da80 a2=0 a3=7ffdcc67da6c items=0 ppid=2608 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.795000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 06:57:44.796000 audit[2746]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2746 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.796000 audit[2746]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffdfe755b0 a2=0 a3=7fffdfe7559c items=0 ppid=2608 pid=2746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:57:44.798000 audit[2748]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.798000 audit[2748]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff35ec9c90 a2=0 a3=7fff35ec9c7c items=0 ppid=2608 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:57:44.801000 audit[2751]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.801000 audit[2751]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffffa496650 a2=0 
a3=7ffffa49663c items=0 ppid=2608 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:57:44.802000 audit[2752]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.802000 audit[2752]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffeacd8990 a2=0 a3=7fffeacd897c items=0 ppid=2608 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:57:44.804000 audit[2754]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.804000 audit[2754]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcc345a350 a2=0 a3=7ffcc345a33c items=0 ppid=2608 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:57:44.805000 
audit[2755]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.805000 audit[2755]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf2482700 a2=0 a3=7ffcf24826ec items=0 ppid=2608 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:57:44.807000 audit[2757]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.807000 audit[2757]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff055fdf00 a2=0 a3=7fff055fdeec items=0 ppid=2608 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:57:44.810000 audit[2760]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:57:44.810000 audit[2760]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc561b7050 a2=0 a3=7ffc561b703c items=0 ppid=2608 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.810000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:57:44.813000 audit[2762]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:57:44.813000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fffb7e53010 a2=0 a3=7fffb7e52ffc items=0 ppid=2608 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.813000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:44.813000 audit[2762]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2762 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:57:44.813000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffb7e53010 a2=0 a3=7fffb7e52ffc items=0 ppid=2608 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:44.813000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:45.667941 kubelet[2418]: E0702 06:57:45.667906 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:46.476341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2824205768.mount: Deactivated successfully. 
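The `proctitle=` fields in the audit records above are the invoking command line, hex-encoded with NUL bytes separating the argv elements. A minimal sketch of decoding one back into its argument vector (the helper name `decode_proctitle` is ours, not part of any audit tooling):

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its argv list.

    The kernel records the process command line as raw bytes with
    NUL separators between arguments, then hex-encodes it for the
    audit log; reversing that is hex-decode + split on NUL.
    """
    return bytes.fromhex(hexstr).decode("utf-8").split("\x00")


# The iptables-restore invocation that recurs throughout these records:
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
print(argv)
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

Decoded this way, the records show kube-proxy (ppid 2608) driving `xtables-nft-multi` to register the KUBE-* chains and rules in the filter and nat tables, for both IPv4 (`family=2`) and IPv6 (`family=10`).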
Jul 2 06:57:46.669156 kubelet[2418]: E0702 06:57:46.669126 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:46.843286 containerd[1393]: time="2024-07-02T06:57:46.843128720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:46.844327 containerd[1393]: time="2024-07-02T06:57:46.844272829Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052" Jul 2 06:57:46.845934 containerd[1393]: time="2024-07-02T06:57:46.845904349Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:46.848061 containerd[1393]: time="2024-07-02T06:57:46.848021266Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:46.850043 containerd[1393]: time="2024-07-02T06:57:46.850014869Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:46.850626 containerd[1393]: time="2024-07-02T06:57:46.850593411Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.184723096s" Jul 2 06:57:46.850661 containerd[1393]: time="2024-07-02T06:57:46.850624170Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference 
\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 06:57:46.852113 containerd[1393]: time="2024-07-02T06:57:46.852075009Z" level=info msg="CreateContainer within sandbox \"850452480fe3c61a38c7836df0223e266fc47ccad381e27722f95612c6051726\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 06:57:47.128229 containerd[1393]: time="2024-07-02T06:57:47.128165329Z" level=info msg="CreateContainer within sandbox \"850452480fe3c61a38c7836df0223e266fc47ccad381e27722f95612c6051726\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"997fec032391c28eff497864f8ee364e8ce4ff497a019dff9399b1a4c7fa9cdf\"" Jul 2 06:57:47.128779 containerd[1393]: time="2024-07-02T06:57:47.128699617Z" level=info msg="StartContainer for \"997fec032391c28eff497864f8ee364e8ce4ff497a019dff9399b1a4c7fa9cdf\"" Jul 2 06:57:47.176473 containerd[1393]: time="2024-07-02T06:57:47.176364405Z" level=info msg="StartContainer for \"997fec032391c28eff497864f8ee364e8ce4ff497a019dff9399b1a4c7fa9cdf\" returns successfully" Jul 2 06:57:47.678937 kubelet[2418]: I0702 06:57:47.678885 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jqlzh" podStartSLOduration=6.678838284 podCreationTimestamp="2024-07-02 06:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:57:45.675112604 +0000 UTC m=+16.213330661" watchObservedRunningTime="2024-07-02 06:57:47.678838284 +0000 UTC m=+18.217056341" Jul 2 06:57:47.679428 kubelet[2418]: I0702 06:57:47.679008 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-hnb8q" podStartSLOduration=3.493040918 podCreationTimestamp="2024-07-02 06:57:42 +0000 UTC" firstStartedPulling="2024-07-02 06:57:44.664906604 +0000 UTC m=+15.203124661" lastFinishedPulling="2024-07-02 06:57:46.850849685 +0000 UTC 
m=+17.389067742" observedRunningTime="2024-07-02 06:57:47.678727445 +0000 UTC m=+18.216945502" watchObservedRunningTime="2024-07-02 06:57:47.678983999 +0000 UTC m=+18.217202066" Jul 2 06:57:49.853000 audit[2811]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.854564 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 2 06:57:49.854627 kernel: audit: type=1325 audit(1719903469.853:259): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.853000 audit[2811]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd38de5660 a2=0 a3=7ffd38de564c items=0 ppid=2608 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.861211 kernel: audit: type=1300 audit(1719903469.853:259): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd38de5660 a2=0 a3=7ffd38de564c items=0 ppid=2608 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.861266 kernel: audit: type=1327 audit(1719903469.853:259): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.853000 audit[2811]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.853000 audit[2811]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd38de5660 
a2=0 a3=0 items=0 ppid=2608 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.870761 kernel: audit: type=1325 audit(1719903469.853:260): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2811 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.870864 kernel: audit: type=1300 audit(1719903469.853:260): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd38de5660 a2=0 a3=0 items=0 ppid=2608 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.870925 kernel: audit: type=1327 audit(1719903469.853:260): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.868000 audit[2813]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.868000 audit[2813]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff3c403650 a2=0 a3=7fff3c40363c items=0 ppid=2608 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.881468 kernel: audit: type=1325 audit(1719903469.868:261): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.881533 kernel: audit: type=1300 audit(1719903469.868:261): arch=c000003e syscall=46 success=yes 
exit=5908 a0=3 a1=7fff3c403650 a2=0 a3=7fff3c40363c items=0 ppid=2608 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.881566 kernel: audit: type=1327 audit(1719903469.868:261): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.868000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.884000 audit[2813]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.884000 audit[2813]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3c403650 a2=0 a3=0 items=0 ppid=2608 pid=2813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:49.884000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:49.887400 kernel: audit: type=1325 audit(1719903469.884:262): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2813 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:49.971969 kubelet[2418]: I0702 06:57:49.971919 2418 topology_manager.go:215] "Topology Admit Handler" podUID="aca6abe1-a7ec-424b-81e9-dd4caba28d05" podNamespace="calico-system" podName="calico-typha-5b6c84484-8gcbl" Jul 2 06:57:50.057153 kubelet[2418]: I0702 06:57:50.057111 2418 topology_manager.go:215] "Topology Admit Handler" podUID="0492bf70-9894-4989-a37b-b42ca0e87244" podNamespace="calico-system" podName="calico-node-pqtls" Jul 2 06:57:50.081065 kubelet[2418]: I0702 06:57:50.080998 2418 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aca6abe1-a7ec-424b-81e9-dd4caba28d05-typha-certs\") pod \"calico-typha-5b6c84484-8gcbl\" (UID: \"aca6abe1-a7ec-424b-81e9-dd4caba28d05\") " pod="calico-system/calico-typha-5b6c84484-8gcbl" Jul 2 06:57:50.081065 kubelet[2418]: I0702 06:57:50.081050 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdc42\" (UniqueName: \"kubernetes.io/projected/aca6abe1-a7ec-424b-81e9-dd4caba28d05-kube-api-access-kdc42\") pod \"calico-typha-5b6c84484-8gcbl\" (UID: \"aca6abe1-a7ec-424b-81e9-dd4caba28d05\") " pod="calico-system/calico-typha-5b6c84484-8gcbl" Jul 2 06:57:50.081065 kubelet[2418]: I0702 06:57:50.081073 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aca6abe1-a7ec-424b-81e9-dd4caba28d05-tigera-ca-bundle\") pod \"calico-typha-5b6c84484-8gcbl\" (UID: \"aca6abe1-a7ec-424b-81e9-dd4caba28d05\") " pod="calico-system/calico-typha-5b6c84484-8gcbl" Jul 2 06:57:50.182021 kubelet[2418]: I0702 06:57:50.181968 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-lib-modules\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182021 kubelet[2418]: I0702 06:57:50.182010 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-run-calico\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182021 kubelet[2418]: I0702 06:57:50.182034 2418 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-flexvol-driver-host\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182296 kubelet[2418]: I0702 06:57:50.182057 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0492bf70-9894-4989-a37b-b42ca0e87244-tigera-ca-bundle\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182296 kubelet[2418]: I0702 06:57:50.182080 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-lib-calico\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182296 kubelet[2418]: I0702 06:57:50.182101 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-net-dir\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182296 kubelet[2418]: I0702 06:57:50.182125 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-policysync\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182296 kubelet[2418]: I0702 06:57:50.182194 2418 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-bin-dir\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182507 kubelet[2418]: I0702 06:57:50.182301 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-xtables-lock\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182554 kubelet[2418]: I0702 06:57:50.182544 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-log-dir\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182592 kubelet[2418]: I0702 06:57:50.182567 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcf2\" (UniqueName: \"kubernetes.io/projected/0492bf70-9894-4989-a37b-b42ca0e87244-kube-api-access-pbcf2\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.182627 kubelet[2418]: I0702 06:57:50.182605 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0492bf70-9894-4989-a37b-b42ca0e87244-node-certs\") pod \"calico-node-pqtls\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " pod="calico-system/calico-node-pqtls" Jul 2 06:57:50.284787 kubelet[2418]: E0702 06:57:50.284752 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jul 2 06:57:50.284787 kubelet[2418]: W0702 06:57:50.284776 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.284787 kubelet[2418]: E0702 06:57:50.284796 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.285017 kubelet[2418]: E0702 06:57:50.284923 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285017 kubelet[2418]: W0702 06:57:50.284931 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.285017 kubelet[2418]: E0702 06:57:50.284945 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.285099 kubelet[2418]: E0702 06:57:50.285079 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285099 kubelet[2418]: W0702 06:57:50.285096 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.285155 kubelet[2418]: E0702 06:57:50.285108 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.285285 kubelet[2418]: E0702 06:57:50.285267 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285285 kubelet[2418]: W0702 06:57:50.285279 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.285342 kubelet[2418]: E0702 06:57:50.285293 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.285456 kubelet[2418]: E0702 06:57:50.285437 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285456 kubelet[2418]: W0702 06:57:50.285451 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.285513 kubelet[2418]: E0702 06:57:50.285463 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.285638 kubelet[2418]: E0702 06:57:50.285616 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285638 kubelet[2418]: W0702 06:57:50.285633 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.285731 kubelet[2418]: E0702 06:57:50.285710 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.285826 kubelet[2418]: E0702 06:57:50.285803 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.285826 kubelet[2418]: W0702 06:57:50.285813 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.286041 kubelet[2418]: E0702 06:57:50.285897 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.286078 kubelet[2418]: E0702 06:57:50.286046 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.286078 kubelet[2418]: W0702 06:57:50.286061 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.286177 kubelet[2418]: E0702 06:57:50.286145 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.286353 kubelet[2418]: E0702 06:57:50.286333 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.286353 kubelet[2418]: W0702 06:57:50.286343 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.286497 kubelet[2418]: E0702 06:57:50.286413 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.286555 kubelet[2418]: E0702 06:57:50.286538 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.286555 kubelet[2418]: W0702 06:57:50.286549 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.286625 kubelet[2418]: E0702 06:57:50.286576 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.286718 kubelet[2418]: E0702 06:57:50.286702 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.286718 kubelet[2418]: W0702 06:57:50.286712 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.286782 kubelet[2418]: E0702 06:57:50.286729 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.286945 kubelet[2418]: E0702 06:57:50.286931 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.286945 kubelet[2418]: W0702 06:57:50.286941 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.287018 kubelet[2418]: E0702 06:57:50.286955 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.287138 kubelet[2418]: E0702 06:57:50.287122 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.287138 kubelet[2418]: W0702 06:57:50.287135 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.287225 kubelet[2418]: E0702 06:57:50.287154 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.287343 kubelet[2418]: E0702 06:57:50.287323 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.287414 kubelet[2418]: W0702 06:57:50.287398 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.287486 kubelet[2418]: E0702 06:57:50.287421 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.287607 kubelet[2418]: E0702 06:57:50.287597 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.287662 kubelet[2418]: W0702 06:57:50.287654 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.287749 kubelet[2418]: E0702 06:57:50.287727 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.288005 kubelet[2418]: E0702 06:57:50.287987 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.288070 kubelet[2418]: W0702 06:57:50.288061 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.288216 kubelet[2418]: E0702 06:57:50.288200 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.288400 kubelet[2418]: E0702 06:57:50.288387 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.288400 kubelet[2418]: W0702 06:57:50.288399 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.288495 kubelet[2418]: E0702 06:57:50.288472 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.288635 kubelet[2418]: E0702 06:57:50.288615 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.288635 kubelet[2418]: W0702 06:57:50.288626 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.288635 kubelet[2418]: E0702 06:57:50.288646 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.288802 kubelet[2418]: E0702 06:57:50.288788 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.288802 kubelet[2418]: W0702 06:57:50.288798 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.288866 kubelet[2418]: E0702 06:57:50.288813 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.288999 kubelet[2418]: E0702 06:57:50.288979 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.289074 kubelet[2418]: W0702 06:57:50.289041 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.289074 kubelet[2418]: E0702 06:57:50.289063 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.289280 kubelet[2418]: E0702 06:57:50.289265 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.289280 kubelet[2418]: W0702 06:57:50.289276 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.289363 kubelet[2418]: E0702 06:57:50.289294 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.289545 kubelet[2418]: E0702 06:57:50.289530 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.289545 kubelet[2418]: W0702 06:57:50.289541 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.289642 kubelet[2418]: E0702 06:57:50.289558 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.289724 kubelet[2418]: E0702 06:57:50.289699 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.289724 kubelet[2418]: W0702 06:57:50.289708 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.289724 kubelet[2418]: E0702 06:57:50.289717 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.289910 kubelet[2418]: E0702 06:57:50.289877 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.289978 kubelet[2418]: W0702 06:57:50.289954 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.289978 kubelet[2418]: E0702 06:57:50.289974 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.306190 kubelet[2418]: E0702 06:57:50.306074 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.306190 kubelet[2418]: W0702 06:57:50.306095 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.306190 kubelet[2418]: E0702 06:57:50.306121 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.310190 kubelet[2418]: I0702 06:57:50.310173 2418 topology_manager.go:215] "Topology Admit Handler" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" podNamespace="calico-system" podName="csi-node-driver-hth2l" Jul 2 06:57:50.314108 kubelet[2418]: E0702 06:57:50.312266 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:50.324205 kubelet[2418]: E0702 06:57:50.322738 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.324205 kubelet[2418]: W0702 06:57:50.322757 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.324205 kubelet[2418]: E0702 06:57:50.322783 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.360271 kubelet[2418]: E0702 06:57:50.360235 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:50.360915 containerd[1393]: time="2024-07-02T06:57:50.360853006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqtls,Uid:0492bf70-9894-4989-a37b-b42ca0e87244,Namespace:calico-system,Attempt:0,}" Jul 2 06:57:50.384201 kubelet[2418]: E0702 06:57:50.384173 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.384416 kubelet[2418]: W0702 06:57:50.384395 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.384498 kubelet[2418]: E0702 06:57:50.384486 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.384781 kubelet[2418]: E0702 06:57:50.384770 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.384857 kubelet[2418]: W0702 06:57:50.384846 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.384936 kubelet[2418]: E0702 06:57:50.384921 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.385130 kubelet[2418]: E0702 06:57:50.385117 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.385130 kubelet[2418]: W0702 06:57:50.385128 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.385227 kubelet[2418]: E0702 06:57:50.385141 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.385329 kubelet[2418]: E0702 06:57:50.385316 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.385329 kubelet[2418]: W0702 06:57:50.385327 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.385329 kubelet[2418]: E0702 06:57:50.385339 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.385546 kubelet[2418]: E0702 06:57:50.385532 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.385546 kubelet[2418]: W0702 06:57:50.385543 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.385546 kubelet[2418]: E0702 06:57:50.385554 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.385734 kubelet[2418]: E0702 06:57:50.385721 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.385734 kubelet[2418]: W0702 06:57:50.385731 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.385827 kubelet[2418]: E0702 06:57:50.385757 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.385970 kubelet[2418]: E0702 06:57:50.385956 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.385970 kubelet[2418]: W0702 06:57:50.385968 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.386071 kubelet[2418]: E0702 06:57:50.385982 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.386179 kubelet[2418]: E0702 06:57:50.386167 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.386179 kubelet[2418]: W0702 06:57:50.386177 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.386254 kubelet[2418]: E0702 06:57:50.386191 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.386393 kubelet[2418]: E0702 06:57:50.386368 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.386466 kubelet[2418]: W0702 06:57:50.386456 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.386528 kubelet[2418]: E0702 06:57:50.386516 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.386692 kubelet[2418]: E0702 06:57:50.386681 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.386692 kubelet[2418]: W0702 06:57:50.386691 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.386778 kubelet[2418]: E0702 06:57:50.386705 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.386879 kubelet[2418]: E0702 06:57:50.386865 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.386948 kubelet[2418]: W0702 06:57:50.386876 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.386948 kubelet[2418]: E0702 06:57:50.386909 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.387104 kubelet[2418]: E0702 06:57:50.387080 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.387104 kubelet[2418]: W0702 06:57:50.387102 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.387203 kubelet[2418]: E0702 06:57:50.387116 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.387323 kubelet[2418]: E0702 06:57:50.387311 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.387323 kubelet[2418]: W0702 06:57:50.387321 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.387431 kubelet[2418]: E0702 06:57:50.387334 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.387532 kubelet[2418]: E0702 06:57:50.387520 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.387532 kubelet[2418]: W0702 06:57:50.387530 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.387618 kubelet[2418]: E0702 06:57:50.387543 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.387722 kubelet[2418]: E0702 06:57:50.387710 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.387722 kubelet[2418]: W0702 06:57:50.387720 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.387799 kubelet[2418]: E0702 06:57:50.387732 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.387978 kubelet[2418]: E0702 06:57:50.387918 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.387978 kubelet[2418]: W0702 06:57:50.387930 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.387978 kubelet[2418]: E0702 06:57:50.387942 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.388176 kubelet[2418]: E0702 06:57:50.388162 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.388176 kubelet[2418]: W0702 06:57:50.388172 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.388250 kubelet[2418]: E0702 06:57:50.388188 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.388413 kubelet[2418]: E0702 06:57:50.388391 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.388413 kubelet[2418]: W0702 06:57:50.388403 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.388413 kubelet[2418]: E0702 06:57:50.388416 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.388626 kubelet[2418]: E0702 06:57:50.388583 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.388626 kubelet[2418]: W0702 06:57:50.388592 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.388626 kubelet[2418]: E0702 06:57:50.388605 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.388764 kubelet[2418]: E0702 06:57:50.388751 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.388764 kubelet[2418]: W0702 06:57:50.388765 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.388866 kubelet[2418]: E0702 06:57:50.388777 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.389053 kubelet[2418]: E0702 06:57:50.389040 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.389053 kubelet[2418]: W0702 06:57:50.389049 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.389129 kubelet[2418]: E0702 06:57:50.389058 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.389129 kubelet[2418]: I0702 06:57:50.389088 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3da56065-eacb-45a3-bb8d-c1271ca90971-socket-dir\") pod \"csi-node-driver-hth2l\" (UID: \"3da56065-eacb-45a3-bb8d-c1271ca90971\") " pod="calico-system/csi-node-driver-hth2l" Jul 2 06:57:50.389315 kubelet[2418]: E0702 06:57:50.389299 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.389315 kubelet[2418]: W0702 06:57:50.389312 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.389406 kubelet[2418]: E0702 06:57:50.389332 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.389406 kubelet[2418]: I0702 06:57:50.389353 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3da56065-eacb-45a3-bb8d-c1271ca90971-kubelet-dir\") pod \"csi-node-driver-hth2l\" (UID: \"3da56065-eacb-45a3-bb8d-c1271ca90971\") " pod="calico-system/csi-node-driver-hth2l" Jul 2 06:57:50.389616 kubelet[2418]: E0702 06:57:50.389589 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.389616 kubelet[2418]: W0702 06:57:50.389606 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.389667 kubelet[2418]: E0702 06:57:50.389626 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.389667 kubelet[2418]: I0702 06:57:50.389648 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s45sk\" (UniqueName: \"kubernetes.io/projected/3da56065-eacb-45a3-bb8d-c1271ca90971-kube-api-access-s45sk\") pod \"csi-node-driver-hth2l\" (UID: \"3da56065-eacb-45a3-bb8d-c1271ca90971\") " pod="calico-system/csi-node-driver-hth2l" Jul 2 06:57:50.389892 kubelet[2418]: E0702 06:57:50.389854 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.389892 kubelet[2418]: W0702 06:57:50.389876 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.389963 kubelet[2418]: E0702 06:57:50.389911 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.389963 kubelet[2418]: I0702 06:57:50.389944 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3da56065-eacb-45a3-bb8d-c1271ca90971-varrun\") pod \"csi-node-driver-hth2l\" (UID: \"3da56065-eacb-45a3-bb8d-c1271ca90971\") " pod="calico-system/csi-node-driver-hth2l" Jul 2 06:57:50.390133 kubelet[2418]: E0702 06:57:50.390121 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.390133 kubelet[2418]: W0702 06:57:50.390131 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.390192 kubelet[2418]: E0702 06:57:50.390140 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.390192 kubelet[2418]: I0702 06:57:50.390155 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3da56065-eacb-45a3-bb8d-c1271ca90971-registration-dir\") pod \"csi-node-driver-hth2l\" (UID: \"3da56065-eacb-45a3-bb8d-c1271ca90971\") " pod="calico-system/csi-node-driver-hth2l" Jul 2 06:57:50.390343 kubelet[2418]: E0702 06:57:50.390327 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.390343 kubelet[2418]: W0702 06:57:50.390342 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.390414 kubelet[2418]: E0702 06:57:50.390357 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.390548 kubelet[2418]: E0702 06:57:50.390534 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.390576 kubelet[2418]: W0702 06:57:50.390547 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.390625 kubelet[2418]: E0702 06:57:50.390611 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.390744 kubelet[2418]: E0702 06:57:50.390736 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.390776 kubelet[2418]: W0702 06:57:50.390745 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.390855 kubelet[2418]: E0702 06:57:50.390826 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.390941 kubelet[2418]: E0702 06:57:50.390891 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.390941 kubelet[2418]: W0702 06:57:50.390897 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.390941 kubelet[2418]: E0702 06:57:50.390908 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.391040 kubelet[2418]: E0702 06:57:50.391020 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391040 kubelet[2418]: W0702 06:57:50.391026 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391040 kubelet[2418]: E0702 06:57:50.391035 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.391180 kubelet[2418]: E0702 06:57:50.391164 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391180 kubelet[2418]: W0702 06:57:50.391175 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391254 kubelet[2418]: E0702 06:57:50.391187 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.391322 kubelet[2418]: E0702 06:57:50.391309 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391322 kubelet[2418]: W0702 06:57:50.391320 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391446 kubelet[2418]: E0702 06:57:50.391331 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.391516 kubelet[2418]: E0702 06:57:50.391500 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391516 kubelet[2418]: W0702 06:57:50.391511 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391582 kubelet[2418]: E0702 06:57:50.391523 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.391687 kubelet[2418]: E0702 06:57:50.391671 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391687 kubelet[2418]: W0702 06:57:50.391683 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391764 kubelet[2418]: E0702 06:57:50.391696 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.391855 kubelet[2418]: E0702 06:57:50.391838 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.391855 kubelet[2418]: W0702 06:57:50.391850 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.391945 kubelet[2418]: E0702 06:57:50.391862 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.491317 kubelet[2418]: E0702 06:57:50.491197 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.491317 kubelet[2418]: W0702 06:57:50.491220 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.491317 kubelet[2418]: E0702 06:57:50.491243 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.491541 kubelet[2418]: E0702 06:57:50.491437 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.491541 kubelet[2418]: W0702 06:57:50.491447 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.491541 kubelet[2418]: E0702 06:57:50.491468 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.491714 kubelet[2418]: E0702 06:57:50.491699 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.491714 kubelet[2418]: W0702 06:57:50.491709 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.491838 kubelet[2418]: E0702 06:57:50.491725 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.492061 kubelet[2418]: E0702 06:57:50.492048 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.492175 kubelet[2418]: W0702 06:57:50.492148 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.492259 kubelet[2418]: E0702 06:57:50.492175 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.492532 kubelet[2418]: E0702 06:57:50.492519 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.492532 kubelet[2418]: W0702 06:57:50.492531 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.492613 kubelet[2418]: E0702 06:57:50.492575 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.492902 kubelet[2418]: E0702 06:57:50.492887 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.492982 kubelet[2418]: W0702 06:57:50.492899 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.492982 kubelet[2418]: E0702 06:57:50.492949 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.493155 kubelet[2418]: E0702 06:57:50.493144 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.493155 kubelet[2418]: W0702 06:57:50.493153 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.493275 kubelet[2418]: E0702 06:57:50.493224 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.493326 kubelet[2418]: E0702 06:57:50.493317 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.493326 kubelet[2418]: W0702 06:57:50.493323 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.493428 kubelet[2418]: E0702 06:57:50.493403 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.493499 kubelet[2418]: E0702 06:57:50.493487 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.493499 kubelet[2418]: W0702 06:57:50.493497 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.493606 kubelet[2418]: E0702 06:57:50.493553 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.493671 kubelet[2418]: E0702 06:57:50.493655 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.493671 kubelet[2418]: W0702 06:57:50.493665 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.493736 kubelet[2418]: E0702 06:57:50.493694 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.493826 kubelet[2418]: E0702 06:57:50.493814 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.493826 kubelet[2418]: W0702 06:57:50.493825 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.493960 kubelet[2418]: E0702 06:57:50.493844 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.494066 kubelet[2418]: E0702 06:57:50.494051 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.494066 kubelet[2418]: W0702 06:57:50.494063 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.494165 kubelet[2418]: E0702 06:57:50.494075 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.494236 kubelet[2418]: E0702 06:57:50.494226 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.494236 kubelet[2418]: W0702 06:57:50.494234 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.494349 kubelet[2418]: E0702 06:57:50.494250 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.494597 kubelet[2418]: E0702 06:57:50.494585 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.494597 kubelet[2418]: W0702 06:57:50.494596 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.494666 kubelet[2418]: E0702 06:57:50.494614 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.494801 kubelet[2418]: E0702 06:57:50.494789 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.494801 kubelet[2418]: W0702 06:57:50.494800 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.494952 kubelet[2418]: E0702 06:57:50.494826 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.495018 kubelet[2418]: E0702 06:57:50.495006 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.495060 kubelet[2418]: W0702 06:57:50.495020 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.495060 kubelet[2418]: E0702 06:57:50.495043 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.495168 kubelet[2418]: E0702 06:57:50.495156 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.495168 kubelet[2418]: W0702 06:57:50.495165 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.495242 kubelet[2418]: E0702 06:57:50.495184 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.495307 kubelet[2418]: E0702 06:57:50.495296 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.495307 kubelet[2418]: W0702 06:57:50.495305 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.495390 kubelet[2418]: E0702 06:57:50.495324 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.495487 kubelet[2418]: E0702 06:57:50.495476 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.495487 kubelet[2418]: W0702 06:57:50.495486 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.495568 kubelet[2418]: E0702 06:57:50.495504 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.495657 kubelet[2418]: E0702 06:57:50.495646 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.495657 kubelet[2418]: W0702 06:57:50.495656 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.495728 kubelet[2418]: E0702 06:57:50.495672 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.495982 kubelet[2418]: E0702 06:57:50.495968 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.496027 kubelet[2418]: W0702 06:57:50.495982 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.496027 kubelet[2418]: E0702 06:57:50.495999 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.496422 kubelet[2418]: E0702 06:57:50.496362 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.496422 kubelet[2418]: W0702 06:57:50.496393 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.496422 kubelet[2418]: E0702 06:57:50.496411 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.496626 kubelet[2418]: E0702 06:57:50.496616 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.496701 kubelet[2418]: W0702 06:57:50.496690 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.496853 kubelet[2418]: E0702 06:57:50.496838 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.497119 kubelet[2418]: E0702 06:57:50.497107 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.497119 kubelet[2418]: W0702 06:57:50.497116 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.497183 kubelet[2418]: E0702 06:57:50.497127 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.576221 kubelet[2418]: E0702 06:57:50.576182 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:50.576918 containerd[1393]: time="2024-07-02T06:57:50.576853932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6c84484-8gcbl,Uid:aca6abe1-a7ec-424b-81e9-dd4caba28d05,Namespace:calico-system,Attempt:0,}" Jul 2 06:57:50.593652 kubelet[2418]: E0702 06:57:50.593622 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.593652 kubelet[2418]: W0702 06:57:50.593640 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.593652 kubelet[2418]: E0702 06:57:50.593659 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.593922 kubelet[2418]: E0702 06:57:50.593909 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.593922 kubelet[2418]: W0702 06:57:50.593921 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.594005 kubelet[2418]: E0702 06:57:50.593930 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.694692 kubelet[2418]: E0702 06:57:50.694648 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.694692 kubelet[2418]: W0702 06:57:50.694667 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.694692 kubelet[2418]: E0702 06:57:50.694687 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.694944 kubelet[2418]: E0702 06:57:50.694926 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.694944 kubelet[2418]: W0702 06:57:50.694940 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.695006 kubelet[2418]: E0702 06:57:50.694949 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.796395 kubelet[2418]: E0702 06:57:50.796281 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.796395 kubelet[2418]: W0702 06:57:50.796297 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.796395 kubelet[2418]: E0702 06:57:50.796318 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.796690 kubelet[2418]: E0702 06:57:50.796630 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.796690 kubelet[2418]: W0702 06:57:50.796644 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.796690 kubelet[2418]: E0702 06:57:50.796658 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:50.814566 kubelet[2418]: E0702 06:57:50.814481 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.814566 kubelet[2418]: W0702 06:57:50.814501 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.814566 kubelet[2418]: E0702 06:57:50.814519 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.895000 audit[2935]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:50.895000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcafa32380 a2=0 a3=7ffcafa3236c items=0 ppid=2608 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:50.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:50.897696 kubelet[2418]: E0702 06:57:50.897673 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.897696 kubelet[2418]: W0702 06:57:50.897690 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.897792 kubelet[2418]: E0702 06:57:50.897714 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:50.896000 audit[2935]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:50.896000 audit[2935]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcafa32380 a2=0 a3=0 items=0 ppid=2608 pid=2935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:50.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:50.998422 kubelet[2418]: E0702 06:57:50.998400 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:50.998422 kubelet[2418]: W0702 06:57:50.998418 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:50.998737 kubelet[2418]: E0702 06:57:50.998437 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:57:51.063576 kubelet[2418]: E0702 06:57:51.063477 2418 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:57:51.063576 kubelet[2418]: W0702 06:57:51.063492 2418 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:57:51.063576 kubelet[2418]: E0702 06:57:51.063509 2418 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:57:51.206993 containerd[1393]: time="2024-07-02T06:57:51.206903031Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:51.207119 containerd[1393]: time="2024-07-02T06:57:51.207008379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:51.207119 containerd[1393]: time="2024-07-02T06:57:51.207045369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:51.207119 containerd[1393]: time="2024-07-02T06:57:51.207073593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:51.229787 containerd[1393]: time="2024-07-02T06:57:51.228303538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:51.229787 containerd[1393]: time="2024-07-02T06:57:51.228391484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:51.229787 containerd[1393]: time="2024-07-02T06:57:51.228412143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:51.229787 containerd[1393]: time="2024-07-02T06:57:51.228425217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:51.252357 containerd[1393]: time="2024-07-02T06:57:51.252181705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pqtls,Uid:0492bf70-9894-4989-a37b-b42ca0e87244,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\"" Jul 2 06:57:51.252899 kubelet[2418]: E0702 06:57:51.252826 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:51.254314 containerd[1393]: time="2024-07-02T06:57:51.253777491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 06:57:51.278956 containerd[1393]: time="2024-07-02T06:57:51.278820273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b6c84484-8gcbl,Uid:aca6abe1-a7ec-424b-81e9-dd4caba28d05,Namespace:calico-system,Attempt:0,} returns sandbox id \"d0941128be96c810d91e73ee9eca615b9543b0be530a79b677f029c23dab0bda\"" Jul 2 06:57:51.280136 kubelet[2418]: E0702 06:57:51.280116 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:51.548797 kubelet[2418]: E0702 06:57:51.548425 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:52.190424 systemd[1]: run-containerd-runc-k8s.io-cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad-runc.7uZqek.mount: Deactivated successfully. Jul 2 06:57:52.716884 containerd[1393]: time="2024-07-02T06:57:52.716752255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:52.717805 containerd[1393]: time="2024-07-02T06:57:52.717712554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 06:57:52.718917 containerd[1393]: time="2024-07-02T06:57:52.718879743Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:52.720672 containerd[1393]: time="2024-07-02T06:57:52.720635781Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:52.722488 containerd[1393]: time="2024-07-02T06:57:52.722439789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:52.722928 containerd[1393]: time="2024-07-02T06:57:52.722891250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 
1.469084383s" Jul 2 06:57:52.722928 containerd[1393]: time="2024-07-02T06:57:52.722927127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 06:57:52.724147 containerd[1393]: time="2024-07-02T06:57:52.724117419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 06:57:52.724909 containerd[1393]: time="2024-07-02T06:57:52.724890165Z" level=info msg="CreateContainer within sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:57:52.743080 containerd[1393]: time="2024-07-02T06:57:52.743019027Z" level=info msg="CreateContainer within sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\"" Jul 2 06:57:52.743744 containerd[1393]: time="2024-07-02T06:57:52.743705731Z" level=info msg="StartContainer for \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\"" Jul 2 06:57:52.878677 containerd[1393]: time="2024-07-02T06:57:52.878630748Z" level=info msg="StartContainer for \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\" returns successfully" Jul 2 06:57:53.138821 containerd[1393]: time="2024-07-02T06:57:53.138761462Z" level=info msg="shim disconnected" id=aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8 namespace=k8s.io Jul 2 06:57:53.138821 containerd[1393]: time="2024-07-02T06:57:53.138819352Z" level=warning msg="cleaning up after shim disconnected" id=aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8 namespace=k8s.io Jul 2 06:57:53.139032 containerd[1393]: time="2024-07-02T06:57:53.138827828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:53.189763 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8-rootfs.mount: Deactivated successfully. Jul 2 06:57:53.549103 kubelet[2418]: E0702 06:57:53.548999 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:53.691617 containerd[1393]: time="2024-07-02T06:57:53.691549617Z" level=info msg="StopPodSandbox for \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\"" Jul 2 06:57:53.695574 containerd[1393]: time="2024-07-02T06:57:53.691634988Z" level=info msg="Container to stop \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 06:57:53.693998 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad-shm.mount: Deactivated successfully. Jul 2 06:57:53.715276 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad-rootfs.mount: Deactivated successfully. 
Jul 2 06:57:53.721089 containerd[1393]: time="2024-07-02T06:57:53.721002346Z" level=info msg="shim disconnected" id=cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad namespace=k8s.io Jul 2 06:57:53.721089 containerd[1393]: time="2024-07-02T06:57:53.721058562Z" level=warning msg="cleaning up after shim disconnected" id=cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad namespace=k8s.io Jul 2 06:57:53.721089 containerd[1393]: time="2024-07-02T06:57:53.721068481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:53.732744 containerd[1393]: time="2024-07-02T06:57:53.732680159Z" level=info msg="TearDown network for sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" successfully" Jul 2 06:57:53.732744 containerd[1393]: time="2024-07-02T06:57:53.732725645Z" level=info msg="StopPodSandbox for \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" returns successfully" Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821127 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-net-dir\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821166 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-lib-modules\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821186 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-run-calico\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " 
Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821206 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-xtables-lock\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821223 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-log-dir\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.821750 kubelet[2418]: I0702 06:57:53.821241 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-policysync\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822073 kubelet[2418]: I0702 06:57:53.821244 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822073 kubelet[2418]: I0702 06:57:53.821270 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0492bf70-9894-4989-a37b-b42ca0e87244-node-certs\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822073 kubelet[2418]: I0702 06:57:53.821255 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822073 kubelet[2418]: I0702 06:57:53.821294 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbcf2\" (UniqueName: \"kubernetes.io/projected/0492bf70-9894-4989-a37b-b42ca0e87244-kube-api-access-pbcf2\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822073 kubelet[2418]: I0702 06:57:53.821300 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822267 kubelet[2418]: I0702 06:57:53.821316 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-lib-calico\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822267 kubelet[2418]: I0702 06:57:53.821325 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822267 kubelet[2418]: I0702 06:57:53.821335 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-flexvol-driver-host\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822267 kubelet[2418]: I0702 06:57:53.821345 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822267 kubelet[2418]: I0702 06:57:53.821354 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0492bf70-9894-4989-a37b-b42ca0e87244-tigera-ca-bundle\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821383 2418 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-bin-dir\") pod \"0492bf70-9894-4989-a37b-b42ca0e87244\" (UID: \"0492bf70-9894-4989-a37b-b42ca0e87244\") " Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821396 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-policysync" (OuterVolumeSpecName: "policysync") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821418 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821427 2418 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821440 2418 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821448 2418 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.822449 kubelet[2418]: I0702 06:57:53.821456 2418 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.822680 kubelet[2418]: I0702 06:57:53.821464 2418 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.822680 kubelet[2418]: I0702 06:57:53.821480 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822680 kubelet[2418]: I0702 06:57:53.821693 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:57:53.822680 kubelet[2418]: I0702 06:57:53.821801 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0492bf70-9894-4989-a37b-b42ca0e87244-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 06:57:53.824018 kubelet[2418]: I0702 06:57:53.823961 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0492bf70-9894-4989-a37b-b42ca0e87244-node-certs" (OuterVolumeSpecName: "node-certs") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 06:57:53.824270 kubelet[2418]: I0702 06:57:53.824242 2418 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0492bf70-9894-4989-a37b-b42ca0e87244-kube-api-access-pbcf2" (OuterVolumeSpecName: "kube-api-access-pbcf2") pod "0492bf70-9894-4989-a37b-b42ca0e87244" (UID: "0492bf70-9894-4989-a37b-b42ca0e87244"). InnerVolumeSpecName "kube-api-access-pbcf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 06:57:53.825486 systemd[1]: var-lib-kubelet-pods-0492bf70\x2d9894\x2d4989\x2da37b\x2db42ca0e87244-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpbcf2.mount: Deactivated successfully. Jul 2 06:57:53.825639 systemd[1]: var-lib-kubelet-pods-0492bf70\x2d9894\x2d4989\x2da37b\x2db42ca0e87244-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922118 2418 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-pbcf2\" (UniqueName: \"kubernetes.io/projected/0492bf70-9894-4989-a37b-b42ca0e87244-kube-api-access-pbcf2\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922164 2418 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922183 2418 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922196 2418 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0492bf70-9894-4989-a37b-b42ca0e87244-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922209 2418 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922221 2418 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/0492bf70-9894-4989-a37b-b42ca0e87244-policysync\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:53.922155 kubelet[2418]: I0702 06:57:53.922234 2418 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0492bf70-9894-4989-a37b-b42ca0e87244-node-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 06:57:54.604538 containerd[1393]: time="2024-07-02T06:57:54.604481327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:54.607991 containerd[1393]: time="2024-07-02T06:57:54.607820664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 06:57:54.609128 containerd[1393]: time="2024-07-02T06:57:54.609090404Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:54.610735 containerd[1393]: time="2024-07-02T06:57:54.610712609Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:54.612636 containerd[1393]: time="2024-07-02T06:57:54.612606885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:57:54.613513 containerd[1393]: time="2024-07-02T06:57:54.613329687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 1.889179415s" Jul 2 
06:57:54.613513 containerd[1393]: time="2024-07-02T06:57:54.613396963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 06:57:54.632347 containerd[1393]: time="2024-07-02T06:57:54.632285828Z" level=info msg="CreateContainer within sandbox \"d0941128be96c810d91e73ee9eca615b9543b0be530a79b677f029c23dab0bda\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 06:57:54.647589 containerd[1393]: time="2024-07-02T06:57:54.647535038Z" level=info msg="CreateContainer within sandbox \"d0941128be96c810d91e73ee9eca615b9543b0be530a79b677f029c23dab0bda\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5c88552728f53f759600d88c2175937c440fb2540e22306252e2d6f9ad9cbaa\"" Jul 2 06:57:54.648273 containerd[1393]: time="2024-07-02T06:57:54.648232863Z" level=info msg="StartContainer for \"e5c88552728f53f759600d88c2175937c440fb2540e22306252e2d6f9ad9cbaa\"" Jul 2 06:57:54.692705 kubelet[2418]: I0702 06:57:54.692510 2418 scope.go:117] "RemoveContainer" containerID="aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8" Jul 2 06:57:54.697496 containerd[1393]: time="2024-07-02T06:57:54.697451813Z" level=info msg="RemoveContainer for \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\"" Jul 2 06:57:54.702669 containerd[1393]: time="2024-07-02T06:57:54.702629742Z" level=info msg="RemoveContainer for \"aaa1b9ad86861bd72be255676fe1b1c4a136da8d73fcbfb93382c681ae46b4f8\" returns successfully" Jul 2 06:57:54.717100 containerd[1393]: time="2024-07-02T06:57:54.717054010Z" level=info msg="StartContainer for \"e5c88552728f53f759600d88c2175937c440fb2540e22306252e2d6f9ad9cbaa\" returns successfully" Jul 2 06:57:54.750481 kubelet[2418]: I0702 06:57:54.750433 2418 topology_manager.go:215] "Topology Admit Handler" podUID="adb1477f-769f-47f4-be31-c21edd9b931e" podNamespace="calico-system" 
podName="calico-node-2sfgk" Jul 2 06:57:54.751058 kubelet[2418]: E0702 06:57:54.751034 2418 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0492bf70-9894-4989-a37b-b42ca0e87244" containerName="flexvol-driver" Jul 2 06:57:54.751116 kubelet[2418]: I0702 06:57:54.751079 2418 memory_manager.go:346] "RemoveStaleState removing state" podUID="0492bf70-9894-4989-a37b-b42ca0e87244" containerName="flexvol-driver" Jul 2 06:57:54.831925 kubelet[2418]: I0702 06:57:54.831884 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/adb1477f-769f-47f4-be31-c21edd9b931e-node-certs\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832100 kubelet[2418]: I0702 06:57:54.831953 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-var-run-calico\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832100 kubelet[2418]: I0702 06:57:54.831981 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-cni-log-dir\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832100 kubelet[2418]: I0702 06:57:54.832000 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt9zq\" (UniqueName: \"kubernetes.io/projected/adb1477f-769f-47f4-be31-c21edd9b931e-kube-api-access-bt9zq\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832100 
kubelet[2418]: I0702 06:57:54.832017 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-flexvol-driver-host\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832100 kubelet[2418]: I0702 06:57:54.832037 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-var-lib-calico\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832248 kubelet[2418]: I0702 06:57:54.832063 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-xtables-lock\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832248 kubelet[2418]: I0702 06:57:54.832080 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-cni-bin-dir\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832248 kubelet[2418]: I0702 06:57:54.832100 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-policysync\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832248 kubelet[2418]: I0702 06:57:54.832118 2418 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-cni-net-dir\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832248 kubelet[2418]: I0702 06:57:54.832136 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adb1477f-769f-47f4-be31-c21edd9b931e-lib-modules\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:54.832420 kubelet[2418]: I0702 06:57:54.832152 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/adb1477f-769f-47f4-be31-c21edd9b931e-tigera-ca-bundle\") pod \"calico-node-2sfgk\" (UID: \"adb1477f-769f-47f4-be31-c21edd9b931e\") " pod="calico-system/calico-node-2sfgk" Jul 2 06:57:55.055671 kubelet[2418]: E0702 06:57:55.055631 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:55.056401 containerd[1393]: time="2024-07-02T06:57:55.056333296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2sfgk,Uid:adb1477f-769f-47f4-be31-c21edd9b931e,Namespace:calico-system,Attempt:0,}" Jul 2 06:57:55.077942 containerd[1393]: time="2024-07-02T06:57:55.077840680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:57:55.077942 containerd[1393]: time="2024-07-02T06:57:55.077897888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:55.077942 containerd[1393]: time="2024-07-02T06:57:55.077911273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:57:55.077942 containerd[1393]: time="2024-07-02T06:57:55.077921182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:57:55.121014 containerd[1393]: time="2024-07-02T06:57:55.120960467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2sfgk,Uid:adb1477f-769f-47f4-be31-c21edd9b931e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\"" Jul 2 06:57:55.121975 kubelet[2418]: E0702 06:57:55.121953 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:55.125020 containerd[1393]: time="2024-07-02T06:57:55.124969384Z" level=info msg="CreateContainer within sandbox \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:57:55.143188 containerd[1393]: time="2024-07-02T06:57:55.143134166Z" level=info msg="CreateContainer within sandbox \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f7c8c114a6ceb07c3bc38a7008ee5ce71409fc9e42b60e7e02ea6435132dce6\"" Jul 2 06:57:55.143641 containerd[1393]: time="2024-07-02T06:57:55.143603369Z" level=info msg="StartContainer for \"9f7c8c114a6ceb07c3bc38a7008ee5ce71409fc9e42b60e7e02ea6435132dce6\"" Jul 2 06:57:55.456648 containerd[1393]: time="2024-07-02T06:57:55.456589870Z" level=info msg="StartContainer for \"9f7c8c114a6ceb07c3bc38a7008ee5ce71409fc9e42b60e7e02ea6435132dce6\" returns 
successfully" Jul 2 06:57:55.549086 kubelet[2418]: E0702 06:57:55.549041 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:56.455165 kubelet[2418]: I0702 06:57:56.455109 2418 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0492bf70-9894-4989-a37b-b42ca0e87244" path="/var/lib/kubelet/pods/0492bf70-9894-4989-a37b-b42ca0e87244/volumes" Jul 2 06:57:56.459618 kubelet[2418]: E0702 06:57:56.459589 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:56.460787 kubelet[2418]: E0702 06:57:56.460755 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:57.113442 containerd[1393]: time="2024-07-02T06:57:57.113344103Z" level=info msg="shim disconnected" id=9f7c8c114a6ceb07c3bc38a7008ee5ce71409fc9e42b60e7e02ea6435132dce6 namespace=k8s.io Jul 2 06:57:57.113442 containerd[1393]: time="2024-07-02T06:57:57.113434674Z" level=warning msg="cleaning up after shim disconnected" id=9f7c8c114a6ceb07c3bc38a7008ee5ce71409fc9e42b60e7e02ea6435132dce6 namespace=k8s.io Jul 2 06:57:57.113442 containerd[1393]: time="2024-07-02T06:57:57.113445995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:57:57.259000 audit[3292]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:57.276013 kernel: kauditd_printk_skb: 8 callbacks suppressed Jul 2 06:57:57.276190 kernel: audit: type=1325 audit(1719903477.259:265): table=filter:95 
family=2 entries=15 op=nft_register_rule pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:57.259000 audit[3292]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc79ddfad0 a2=0 a3=7ffc79ddfabc items=0 ppid=2608 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:57.282333 kernel: audit: type=1300 audit(1719903477.259:265): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc79ddfad0 a2=0 a3=7ffc79ddfabc items=0 ppid=2608 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:57.282458 kernel: audit: type=1327 audit(1719903477.259:265): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:57.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:57.283974 kernel: audit: type=1325 audit(1719903477.259:266): table=nat:96 family=2 entries=19 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:57.259000 audit[3292]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:57:57.285685 kernel: audit: type=1300 audit(1719903477.259:266): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc79ddfad0 a2=0 a3=7ffc79ddfabc items=0 ppid=2608 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:57.259000 audit[3292]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc79ddfad0 a2=0 a3=7ffc79ddfabc items=0 ppid=2608 pid=3292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:57.289276 kernel: audit: type=1327 audit(1719903477.259:266): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:57.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:57:57.463558 kubelet[2418]: E0702 06:57:57.463194 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:57.463558 kubelet[2418]: E0702 06:57:57.463322 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:57.466827 containerd[1393]: time="2024-07-02T06:57:57.466778942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 06:57:57.518090 kubelet[2418]: I0702 06:57:57.518057 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5b6c84484-8gcbl" podStartSLOduration=5.185579307 podCreationTimestamp="2024-07-02 06:57:49 +0000 UTC" firstStartedPulling="2024-07-02 06:57:51.281303221 +0000 UTC m=+21.819521278" lastFinishedPulling="2024-07-02 06:57:54.61373503 +0000 UTC m=+25.151953087" observedRunningTime="2024-07-02 06:57:57.055636927 +0000 UTC m=+27.593854984" watchObservedRunningTime="2024-07-02 06:57:57.518011116 +0000 UTC m=+28.056229183" Jul 2 06:57:57.548568 kubelet[2418]: E0702 06:57:57.548515 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is 
not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:58.464322 kubelet[2418]: E0702 06:57:58.464292 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:57:59.733048 kubelet[2418]: E0702 06:57:59.733010 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:57:59.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:44132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:59.739832 systemd[1]: Started sshd@7-10.0.0.85:22-10.0.0.1:44132.service - OpenSSH per-connection server daemon (10.0.0.1:44132). Jul 2 06:57:59.743409 kernel: audit: type=1130 audit(1719903479.739:267): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:44132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:57:59.772000 audit[3297]: USER_ACCT pid=3297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.772870 sshd[3297]: Accepted publickey for core from 10.0.0.1 port 44132 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:57:59.774427 sshd[3297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:57:59.780576 kernel: audit: type=1101 audit(1719903479.772:268): pid=3297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.780733 kernel: audit: type=1103 audit(1719903479.773:269): pid=3297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.780763 kernel: audit: type=1006 audit(1719903479.773:270): pid=3297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 2 06:57:59.773000 audit[3297]: CRED_ACQ pid=3297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.773000 audit[3297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdcf9ae9b0 a2=3 a3=7fc1ab43b480 items=0 ppid=1 pid=3297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:57:59.773000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:57:59.785242 systemd-logind[1375]: New session 8 of user core. Jul 2 06:57:59.789764 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 06:57:59.796000 audit[3297]: USER_START pid=3297 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.797000 audit[3300]: CRED_ACQ pid=3300 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.924607 sshd[3297]: pam_unix(sshd:session): session closed for user core Jul 2 06:57:59.925000 audit[3297]: USER_END pid=3297 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.925000 audit[3297]: CRED_DISP pid=3297 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:57:59.927684 systemd[1]: sshd@7-10.0.0.85:22-10.0.0.1:44132.service: Deactivated successfully. Jul 2 06:57:59.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.85:22-10.0.0.1:44132 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:57:59.928856 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 06:57:59.929606 systemd-logind[1375]: Session 8 logged out. 
Waiting for processes to exit. Jul 2 06:57:59.930876 systemd-logind[1375]: Removed session 8. Jul 2 06:58:01.548949 kubelet[2418]: E0702 06:58:01.548913 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:58:01.674592 containerd[1393]: time="2024-07-02T06:58:01.674528244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:01.675511 containerd[1393]: time="2024-07-02T06:58:01.675463753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 06:58:01.677350 containerd[1393]: time="2024-07-02T06:58:01.677321285Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:01.679359 containerd[1393]: time="2024-07-02T06:58:01.679319653Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:01.681106 containerd[1393]: time="2024-07-02T06:58:01.681047341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:01.681682 containerd[1393]: time="2024-07-02T06:58:01.681633273Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 4.214815287s" Jul 2 06:58:01.681682 containerd[1393]: time="2024-07-02T06:58:01.681676794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 06:58:01.683640 containerd[1393]: time="2024-07-02T06:58:01.683601323Z" level=info msg="CreateContainer within sandbox \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 06:58:01.697770 containerd[1393]: time="2024-07-02T06:58:01.697725067Z" level=info msg="CreateContainer within sandbox \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2\"" Jul 2 06:58:01.698258 containerd[1393]: time="2024-07-02T06:58:01.698232171Z" level=info msg="StartContainer for \"278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2\"" Jul 2 06:58:01.821782 containerd[1393]: time="2024-07-02T06:58:01.821671078Z" level=info msg="StartContainer for \"278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2\" returns successfully" Jul 2 06:58:02.740119 kubelet[2418]: E0702 06:58:02.740076 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:02.780707 containerd[1393]: time="2024-07-02T06:58:02.780645061Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 06:58:02.802774 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2-rootfs.mount: Deactivated successfully. Jul 2 06:58:02.805557 kubelet[2418]: I0702 06:58:02.805531 2418 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 06:58:02.807679 containerd[1393]: time="2024-07-02T06:58:02.807518497Z" level=info msg="shim disconnected" id=278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2 namespace=k8s.io Jul 2 06:58:02.807679 containerd[1393]: time="2024-07-02T06:58:02.807601253Z" level=warning msg="cleaning up after shim disconnected" id=278333dbfc598ee3198696308e510f62c22b1af35c1543e5902f26d29a4871e2 namespace=k8s.io Jul 2 06:58:02.807679 containerd[1393]: time="2024-07-02T06:58:02.807613065Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:58:02.826420 kubelet[2418]: I0702 06:58:02.823906 2418 topology_manager.go:215] "Topology Admit Handler" podUID="48ddd97c-85ff-49ef-8095-30d9677f14bd" podNamespace="kube-system" podName="coredns-5dd5756b68-5tvlc" Jul 2 06:58:02.827928 kubelet[2418]: I0702 06:58:02.827816 2418 topology_manager.go:215] "Topology Admit Handler" podUID="9c276f6d-ef93-47c1-b679-37a0af5e9a64" podNamespace="calico-system" podName="calico-kube-controllers-d8f857889-tt9xj" Jul 2 06:58:02.828048 kubelet[2418]: I0702 06:58:02.828026 2418 topology_manager.go:215] "Topology Admit Handler" podUID="9c333969-8c44-45e3-a4bd-3452f33a72a4" podNamespace="kube-system" podName="coredns-5dd5756b68-22sns" Jul 2 06:58:02.835212 kubelet[2418]: I0702 06:58:02.835182 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/48ddd97c-85ff-49ef-8095-30d9677f14bd-config-volume\") pod \"coredns-5dd5756b68-5tvlc\" (UID: \"48ddd97c-85ff-49ef-8095-30d9677f14bd\") " pod="kube-system/coredns-5dd5756b68-5tvlc" Jul 2 06:58:02.835212 kubelet[2418]: I0702 06:58:02.835222 2418 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c333969-8c44-45e3-a4bd-3452f33a72a4-config-volume\") pod \"coredns-5dd5756b68-22sns\" (UID: \"9c333969-8c44-45e3-a4bd-3452f33a72a4\") " pod="kube-system/coredns-5dd5756b68-22sns" Jul 2 06:58:02.835443 kubelet[2418]: I0702 06:58:02.835247 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5vh\" (UniqueName: \"kubernetes.io/projected/48ddd97c-85ff-49ef-8095-30d9677f14bd-kube-api-access-7v5vh\") pod \"coredns-5dd5756b68-5tvlc\" (UID: \"48ddd97c-85ff-49ef-8095-30d9677f14bd\") " pod="kube-system/coredns-5dd5756b68-5tvlc" Jul 2 06:58:02.835443 kubelet[2418]: I0702 06:58:02.835338 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stjsr\" (UniqueName: \"kubernetes.io/projected/9c276f6d-ef93-47c1-b679-37a0af5e9a64-kube-api-access-stjsr\") pod \"calico-kube-controllers-d8f857889-tt9xj\" (UID: \"9c276f6d-ef93-47c1-b679-37a0af5e9a64\") " pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" Jul 2 06:58:02.835443 kubelet[2418]: I0702 06:58:02.835408 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkfxd\" (UniqueName: \"kubernetes.io/projected/9c333969-8c44-45e3-a4bd-3452f33a72a4-kube-api-access-fkfxd\") pod \"coredns-5dd5756b68-22sns\" (UID: \"9c333969-8c44-45e3-a4bd-3452f33a72a4\") " pod="kube-system/coredns-5dd5756b68-22sns" Jul 2 06:58:02.835514 kubelet[2418]: I0702 06:58:02.835453 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c276f6d-ef93-47c1-b679-37a0af5e9a64-tigera-ca-bundle\") pod \"calico-kube-controllers-d8f857889-tt9xj\" (UID: \"9c276f6d-ef93-47c1-b679-37a0af5e9a64\") " 
pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" Jul 2 06:58:03.126392 kubelet[2418]: E0702 06:58:03.126338 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:03.127178 containerd[1393]: time="2024-07-02T06:58:03.126914646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5tvlc,Uid:48ddd97c-85ff-49ef-8095-30d9677f14bd,Namespace:kube-system,Attempt:0,}" Jul 2 06:58:03.132442 kubelet[2418]: E0702 06:58:03.132309 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:03.132974 containerd[1393]: time="2024-07-02T06:58:03.132915064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8f857889-tt9xj,Uid:9c276f6d-ef93-47c1-b679-37a0af5e9a64,Namespace:calico-system,Attempt:0,}" Jul 2 06:58:03.133127 containerd[1393]: time="2024-07-02T06:58:03.132991388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22sns,Uid:9c333969-8c44-45e3-a4bd-3452f33a72a4,Namespace:kube-system,Attempt:0,}" Jul 2 06:58:03.321647 containerd[1393]: time="2024-07-02T06:58:03.321519622Z" level=error msg="Failed to destroy network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.322018 containerd[1393]: time="2024-07-02T06:58:03.321963817Z" level=error msg="encountered an error cleaning up failed sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.322099 containerd[1393]: time="2024-07-02T06:58:03.322038227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5tvlc,Uid:48ddd97c-85ff-49ef-8095-30d9677f14bd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.322641 containerd[1393]: time="2024-07-02T06:58:03.322550530Z" level=error msg="Failed to destroy network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.323073 containerd[1393]: time="2024-07-02T06:58:03.323040371Z" level=error msg="encountered an error cleaning up failed sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.323199 containerd[1393]: time="2024-07-02T06:58:03.323169503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22sns,Uid:9c333969-8c44-45e3-a4bd-3452f33a72a4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 2 06:58:03.323416 containerd[1393]: time="2024-07-02T06:58:03.323226450Z" level=error msg="Failed to destroy network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.323752 containerd[1393]: time="2024-07-02T06:58:03.323717633Z" level=error msg="encountered an error cleaning up failed sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.323796 containerd[1393]: time="2024-07-02T06:58:03.323769442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8f857889-tt9xj,Uid:9c276f6d-ef93-47c1-b679-37a0af5e9a64,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.330184 kubelet[2418]: E0702 06:58:03.330130 2418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.330392 kubelet[2418]: E0702 06:58:03.330191 2418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.330392 kubelet[2418]: E0702 06:58:03.330211 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5tvlc" Jul 2 06:58:03.330392 kubelet[2418]: E0702 06:58:03.330238 2418 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-5tvlc" Jul 2 06:58:03.330392 kubelet[2418]: E0702 06:58:03.330246 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-22sns" Jul 2 06:58:03.330534 kubelet[2418]: E0702 06:58:03.330130 2418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.330534 kubelet[2418]: E0702 06:58:03.330266 2418 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-22sns" Jul 2 06:58:03.330534 kubelet[2418]: E0702 06:58:03.330293 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" Jul 2 06:58:03.330640 kubelet[2418]: E0702 06:58:03.330301 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-5tvlc_kube-system(48ddd97c-85ff-49ef-8095-30d9677f14bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-5tvlc_kube-system(48ddd97c-85ff-49ef-8095-30d9677f14bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5tvlc" 
podUID="48ddd97c-85ff-49ef-8095-30d9677f14bd" Jul 2 06:58:03.330640 kubelet[2418]: E0702 06:58:03.330315 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-22sns_kube-system(9c333969-8c44-45e3-a4bd-3452f33a72a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-22sns_kube-system(9c333969-8c44-45e3-a4bd-3452f33a72a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-22sns" podUID="9c333969-8c44-45e3-a4bd-3452f33a72a4" Jul 2 06:58:03.330640 kubelet[2418]: E0702 06:58:03.330316 2418 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" Jul 2 06:58:03.330849 kubelet[2418]: E0702 06:58:03.330364 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d8f857889-tt9xj_calico-system(9c276f6d-ef93-47c1-b679-37a0af5e9a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d8f857889-tt9xj_calico-system(9c276f6d-ef93-47c1-b679-37a0af5e9a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" podUID="9c276f6d-ef93-47c1-b679-37a0af5e9a64" Jul 2 06:58:03.552054 containerd[1393]: time="2024-07-02T06:58:03.551399770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hth2l,Uid:3da56065-eacb-45a3-bb8d-c1271ca90971,Namespace:calico-system,Attempt:0,}" Jul 2 06:58:03.606833 containerd[1393]: time="2024-07-02T06:58:03.606760021Z" level=error msg="Failed to destroy network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.607175 containerd[1393]: time="2024-07-02T06:58:03.607139665Z" level=error msg="encountered an error cleaning up failed sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.607223 containerd[1393]: time="2024-07-02T06:58:03.607190029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hth2l,Uid:3da56065-eacb-45a3-bb8d-c1271ca90971,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.607540 kubelet[2418]: E0702 06:58:03.607506 2418 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.607632 kubelet[2418]: E0702 06:58:03.607571 2418 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hth2l" Jul 2 06:58:03.607632 kubelet[2418]: E0702 06:58:03.607602 2418 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hth2l" Jul 2 06:58:03.607723 kubelet[2418]: E0702 06:58:03.607693 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hth2l_calico-system(3da56065-eacb-45a3-bb8d-c1271ca90971)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hth2l_calico-system(3da56065-eacb-45a3-bb8d-c1271ca90971)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hth2l" 
podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:58:03.743532 kubelet[2418]: I0702 06:58:03.743481 2418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:03.744168 containerd[1393]: time="2024-07-02T06:58:03.744117439Z" level=info msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" Jul 2 06:58:03.744168 containerd[1393]: time="2024-07-02T06:58:03.744355216Z" level=info msg="Ensure that sandbox 353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc in task-service has been cleanup successfully" Jul 2 06:58:03.744547 kubelet[2418]: I0702 06:58:03.744523 2418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:03.745158 containerd[1393]: time="2024-07-02T06:58:03.745113591Z" level=info msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" Jul 2 06:58:03.745366 containerd[1393]: time="2024-07-02T06:58:03.745343713Z" level=info msg="Ensure that sandbox 3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70 in task-service has been cleanup successfully" Jul 2 06:58:03.746464 kubelet[2418]: I0702 06:58:03.746425 2418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:03.747875 containerd[1393]: time="2024-07-02T06:58:03.747824366Z" level=info msg="StopPodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" Jul 2 06:58:03.748110 containerd[1393]: time="2024-07-02T06:58:03.748086558Z" level=info msg="Ensure that sandbox 12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f in task-service has been cleanup successfully" Jul 2 06:58:03.750955 kubelet[2418]: E0702 06:58:03.750668 2418 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:03.753118 containerd[1393]: time="2024-07-02T06:58:03.752960730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 06:58:03.762172 kubelet[2418]: I0702 06:58:03.759113 2418 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:03.762360 containerd[1393]: time="2024-07-02T06:58:03.759699496Z" level=info msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" Jul 2 06:58:03.762360 containerd[1393]: time="2024-07-02T06:58:03.759911775Z" level=info msg="Ensure that sandbox ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf in task-service has been cleanup successfully" Jul 2 06:58:03.792483 containerd[1393]: time="2024-07-02T06:58:03.792409615Z" level=error msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" failed" error="failed to destroy network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.793010 containerd[1393]: time="2024-07-02T06:58:03.792479907Z" level=error msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" failed" error="failed to destroy network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.793306 kubelet[2418]: E0702 06:58:03.793275 2418 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:03.793385 kubelet[2418]: E0702 06:58:03.793351 2418 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70"} Jul 2 06:58:03.793423 kubelet[2418]: E0702 06:58:03.793414 2418 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c276f6d-ef93-47c1-b679-37a0af5e9a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:58:03.793500 kubelet[2418]: E0702 06:58:03.793452 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c276f6d-ef93-47c1-b679-37a0af5e9a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" podUID="9c276f6d-ef93-47c1-b679-37a0af5e9a64" Jul 2 06:58:03.793500 kubelet[2418]: E0702 06:58:03.793491 2418 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:03.793588 kubelet[2418]: E0702 06:58:03.793505 2418 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc"} Jul 2 06:58:03.793588 kubelet[2418]: E0702 06:58:03.793538 2418 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"48ddd97c-85ff-49ef-8095-30d9677f14bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:58:03.793588 kubelet[2418]: E0702 06:58:03.793575 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"48ddd97c-85ff-49ef-8095-30d9677f14bd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-5tvlc" podUID="48ddd97c-85ff-49ef-8095-30d9677f14bd" Jul 2 06:58:03.796741 containerd[1393]: time="2024-07-02T06:58:03.796683127Z" level=error msg="StopPodSandbox for 
\"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" failed" error="failed to destroy network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.796904 kubelet[2418]: E0702 06:58:03.796885 2418 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:03.796959 kubelet[2418]: E0702 06:58:03.796915 2418 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f"} Jul 2 06:58:03.796959 kubelet[2418]: E0702 06:58:03.796953 2418 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c333969-8c44-45e3-a4bd-3452f33a72a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:58:03.797050 kubelet[2418]: E0702 06:58:03.796986 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c333969-8c44-45e3-a4bd-3452f33a72a4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-22sns" podUID="9c333969-8c44-45e3-a4bd-3452f33a72a4" Jul 2 06:58:03.804776 containerd[1393]: time="2024-07-02T06:58:03.804664158Z" level=error msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" failed" error="failed to destroy network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:58:03.804867 kubelet[2418]: E0702 06:58:03.804848 2418 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:03.804916 kubelet[2418]: E0702 06:58:03.804878 2418 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf"} Jul 2 06:58:03.804916 kubelet[2418]: E0702 06:58:03.804905 2418 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3da56065-eacb-45a3-bb8d-c1271ca90971\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:58:03.804916 kubelet[2418]: E0702 06:58:03.804929 2418 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3da56065-eacb-45a3-bb8d-c1271ca90971\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hth2l" podUID="3da56065-eacb-45a3-bb8d-c1271ca90971" Jul 2 06:58:03.805046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc-shm.mount: Deactivated successfully. Jul 2 06:58:04.937677 systemd[1]: Started sshd@8-10.0.0.85:22-10.0.0.1:48478.service - OpenSSH per-connection server daemon (10.0.0.1:48478). Jul 2 06:58:04.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:48478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:04.950451 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 06:58:04.950532 kernel: audit: type=1130 audit(1719903484.937:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:48478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:04.978000 audit[3628]: USER_ACCT pid=3628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:04.978832 sshd[3628]: Accepted publickey for core from 10.0.0.1 port 48478 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:04.979818 sshd[3628]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:04.979000 audit[3628]: CRED_ACQ pid=3628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:04.983418 systemd-logind[1375]: New session 9 of user core. Jul 2 06:58:04.986084 kernel: audit: type=1101 audit(1719903484.978:277): pid=3628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:04.986140 kernel: audit: type=1103 audit(1719903484.979:278): pid=3628 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:04.986162 kernel: audit: type=1006 audit(1719903484.979:279): pid=3628 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 06:58:04.979000 audit[3628]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc4b88e0 a2=3 a3=7fcb8b703480 items=0 ppid=1 pid=3628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:04.992014 kernel: audit: type=1300 audit(1719903484.979:279): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc4b88e0 a2=3 a3=7fcb8b703480 items=0 ppid=1 pid=3628 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:04.992059 kernel: audit: type=1327 audit(1719903484.979:279): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:04.979000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:05.004585 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 06:58:05.009000 audit[3628]: USER_START pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.009000 audit[3631]: CRED_ACQ pid=3631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.016536 kernel: audit: type=1105 audit(1719903485.009:280): pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.016607 kernel: audit: type=1103 audit(1719903485.009:281): pid=3631 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.123350 sshd[3628]: pam_unix(sshd:session): session closed for user core 
Jul 2 06:58:05.123000 audit[3628]: USER_END pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.125861 systemd[1]: sshd@8-10.0.0.85:22-10.0.0.1:48478.service: Deactivated successfully. Jul 2 06:58:05.126906 systemd-logind[1375]: Session 9 logged out. Waiting for processes to exit. Jul 2 06:58:05.126920 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 06:58:05.127955 systemd-logind[1375]: Removed session 9. Jul 2 06:58:05.124000 audit[3628]: CRED_DISP pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.132690 kernel: audit: type=1106 audit(1719903485.123:282): pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.132756 kernel: audit: type=1104 audit(1719903485.124:283): pid=3628 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:05.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.85:22-10.0.0.1:48478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:07.720942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472059942.mount: Deactivated successfully. 
Jul 2 06:58:08.228877 containerd[1393]: time="2024-07-02T06:58:08.228785158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:08.230291 containerd[1393]: time="2024-07-02T06:58:08.230230142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 06:58:08.231572 containerd[1393]: time="2024-07-02T06:58:08.231535605Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:08.233395 containerd[1393]: time="2024-07-02T06:58:08.233338652Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:08.234778 containerd[1393]: time="2024-07-02T06:58:08.234754120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:08.235333 containerd[1393]: time="2024-07-02T06:58:08.235287433Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 4.48227169s" Jul 2 06:58:08.235419 containerd[1393]: time="2024-07-02T06:58:08.235330503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 06:58:08.242936 containerd[1393]: time="2024-07-02T06:58:08.242894772Z" level=info msg="CreateContainer within sandbox 
\"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 06:58:08.260396 containerd[1393]: time="2024-07-02T06:58:08.260339400Z" level=info msg="CreateContainer within sandbox \"7d57be58ea67766b84fcd5157e22e253599e6132a3c4cfa7360d50cb8cee6b1d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b0e9c0b6e26c0b74cc8eb12c2b61acc4385a393258fb98eccefb5d5c7ec407a0\"" Jul 2 06:58:08.260976 containerd[1393]: time="2024-07-02T06:58:08.260947992Z" level=info msg="StartContainer for \"b0e9c0b6e26c0b74cc8eb12c2b61acc4385a393258fb98eccefb5d5c7ec407a0\"" Jul 2 06:58:08.349055 containerd[1393]: time="2024-07-02T06:58:08.348996130Z" level=info msg="StartContainer for \"b0e9c0b6e26c0b74cc8eb12c2b61acc4385a393258fb98eccefb5d5c7ec407a0\" returns successfully" Jul 2 06:58:08.415396 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 06:58:08.415520 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jul 2 06:58:08.771284 kubelet[2418]: E0702 06:58:08.771250 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:08.781428 kubelet[2418]: I0702 06:58:08.781394 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2sfgk" podStartSLOduration=4.009961438 podCreationTimestamp="2024-07-02 06:57:54 +0000 UTC" firstStartedPulling="2024-07-02 06:57:57.464188679 +0000 UTC m=+28.002406736" lastFinishedPulling="2024-07-02 06:58:08.235572999 +0000 UTC m=+38.773791056" observedRunningTime="2024-07-02 06:58:08.780839476 +0000 UTC m=+39.319057523" watchObservedRunningTime="2024-07-02 06:58:08.781345758 +0000 UTC m=+39.319563815" Jul 2 06:58:09.623000 audit[3752]: AVC avc: denied { write } for pid=3752 comm="tee" name="fd" dev="proc" ino=26760 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.623000 audit[3752]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3947ba23 a2=241 a3=1b6 items=1 ppid=3732 pid=3752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.623000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 06:58:09.623000 audit: PATH item=0 name="/dev/fd/63" inode=26754 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.625000 audit[3758]: AVC avc: denied { write } for pid=3758 comm="tee" name="fd" dev="proc" ino=26764 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.625000 audit[3758]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffb99eda32 a2=241 a3=1b6 items=1 ppid=3736 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.625000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 06:58:09.625000 audit: PATH item=0 name="/dev/fd/63" inode=26757 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.631000 audit[3773]: AVC avc: denied { write } for pid=3773 comm="tee" name="fd" dev="proc" ino=26776 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.631000 audit[3773]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff66f8ea32 a2=241 a3=1b6 items=1 ppid=3723 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.631000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 06:58:09.631000 audit: PATH item=0 name="/dev/fd/63" inode=26770 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.631000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.633000 audit[3775]: AVC 
avc: denied { write } for pid=3775 comm="tee" name="fd" dev="proc" ino=26780 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.633000 audit[3775]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc727eea22 a2=241 a3=1b6 items=1 ppid=3730 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.633000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 06:58:09.633000 audit: PATH item=0 name="/dev/fd/63" inode=26773 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.661000 audit[3798]: AVC avc: denied { write } for pid=3798 comm="tee" name="fd" dev="proc" ino=24159 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.661000 audit[3798]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca6995a33 a2=241 a3=1b6 items=1 ppid=3728 pid=3798 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.661000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 06:58:09.661000 audit: PATH item=0 name="/dev/fd/63" inode=25172 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.661000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.663000 audit[3800]: AVC avc: denied { write } for pid=3800 comm="tee" name="fd" dev="proc" ino=25181 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.663000 audit[3800]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd98d4ba32 a2=241 a3=1b6 items=1 ppid=3725 pid=3800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.663000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 06:58:09.663000 audit: PATH item=0 name="/dev/fd/63" inode=25175 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.670000 audit[3805]: AVC avc: denied { write } for pid=3805 comm="tee" name="fd" dev="proc" ino=25931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:58:09.670000 audit[3805]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc04586a34 a2=241 a3=1b6 items=1 ppid=3722 pid=3805 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.670000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 06:58:09.670000 audit: PATH item=0 name="/dev/fd/63" inode=25178 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:58:09.670000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:58:09.842342 systemd-networkd[1177]: vxlan.calico: Link UP Jul 2 06:58:09.842349 systemd-networkd[1177]: vxlan.calico: Gained carrier Jul 2 06:58:09.858000 audit: BPF prog-id=10 op=LOAD Jul 2 06:58:09.858000 audit[3868]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1ec6aee0 a2=70 a3=7f7ce3cb6000 items=0 ppid=3726 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:58:09.858000 audit: BPF prog-id=10 op=UNLOAD Jul 2 06:58:09.858000 audit: BPF prog-id=11 op=LOAD Jul 2 06:58:09.858000 audit[3868]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc1ec6aee0 a2=70 a3=6f items=0 ppid=3726 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:58:09.858000 audit: BPF prog-id=11 op=UNLOAD Jul 2 06:58:09.858000 audit: BPF prog-id=12 op=LOAD Jul 2 06:58:09.858000 audit[3868]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc1ec6ae70 a2=70 a3=7ffc1ec6aee0 items=0 ppid=3726 pid=3868 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.858000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:58:09.858000 audit: BPF prog-id=12 op=UNLOAD Jul 2 06:58:09.859000 audit: BPF prog-id=13 op=LOAD Jul 2 06:58:09.859000 audit[3868]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc1ec6aea0 a2=70 a3=0 items=0 ppid=3726 pid=3868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.859000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:58:09.877000 audit: BPF prog-id=13 op=UNLOAD Jul 2 06:58:09.929000 audit[3900]: NETFILTER_CFG table=raw:97 family=2 entries=19 op=nft_register_chain pid=3900 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.929000 audit[3900]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffe399b6c40 a2=0 a3=7ffe399b6c2c items=0 ppid=3726 pid=3900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.929000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:09.931000 audit[3903]: NETFILTER_CFG table=mangle:98 family=2 entries=16 op=nft_register_chain 
pid=3903 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.931000 audit[3903]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff11164070 a2=0 a3=7fff1116405c items=0 ppid=3726 pid=3903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.931000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:09.938000 audit[3902]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.940275 kernel: kauditd_printk_skb: 58 callbacks suppressed Jul 2 06:58:09.940322 kernel: audit: type=1325 audit(1719903489.938:302): table=nat:99 family=2 entries=15 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.938000 audit[3902]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe7ae216e0 a2=0 a3=7ffe7ae216cc items=0 ppid=3726 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.950420 kernel: audit: type=1300 audit(1719903489.938:302): arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe7ae216e0 a2=0 a3=7ffe7ae216cc items=0 ppid=3726 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.950574 kernel: audit: type=1327 audit(1719903489.938:302): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:09.938000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:09.940000 audit[3901]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.955781 kernel: audit: type=1325 audit(1719903489.940:303): table=filter:100 family=2 entries=39 op=nft_register_chain pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:09.955947 kernel: audit: type=1300 audit(1719903489.940:303): arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffeb9817690 a2=0 a3=7ffeb981767c items=0 ppid=3726 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.940000 audit[3901]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffeb9817690 a2=0 a3=7ffeb981767c items=0 ppid=3726 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:09.940000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:09.964136 kernel: audit: type=1327 audit(1719903489.940:303): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:10.132719 systemd[1]: Started sshd@9-10.0.0.85:22-10.0.0.1:48488.service - OpenSSH 
per-connection server daemon (10.0.0.1:48488). Jul 2 06:58:10.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:48488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:10.137431 kernel: audit: type=1130 audit(1719903490.132:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:48488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:10.158000 audit[3909]: USER_ACCT pid=3909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.159332 sshd[3909]: Accepted publickey for core from 10.0.0.1 port 48488 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:10.160482 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:10.159000 audit[3909]: CRED_ACQ pid=3909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.164436 systemd-logind[1375]: New session 10 of user core. 
Jul 2 06:58:10.166611 kernel: audit: type=1101 audit(1719903490.158:305): pid=3909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.166665 kernel: audit: type=1103 audit(1719903490.159:306): pid=3909 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.166682 kernel: audit: type=1006 audit(1719903490.159:307): pid=3909 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 06:58:10.159000 audit[3909]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda1438650 a2=3 a3=7f54dad7c480 items=0 ppid=1 pid=3909 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:10.159000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:10.174694 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 06:58:10.179000 audit[3909]: USER_START pid=3909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.181000 audit[3912]: CRED_ACQ pid=3912 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.296550 sshd[3909]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:10.297000 audit[3909]: USER_END pid=3909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.297000 audit[3909]: CRED_DISP pid=3909 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:10.299386 systemd[1]: sshd@9-10.0.0.85:22-10.0.0.1:48488.service: Deactivated successfully. Jul 2 06:58:10.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.85:22-10.0.0.1:48488 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:10.300555 systemd-logind[1375]: Session 10 logged out. Waiting for processes to exit. Jul 2 06:58:10.300623 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 06:58:10.301363 systemd-logind[1375]: Removed session 10. 
Jul 2 06:58:11.203526 systemd-networkd[1177]: vxlan.calico: Gained IPv6LL Jul 2 06:58:15.310681 systemd[1]: Started sshd@10-10.0.0.85:22-10.0.0.1:44898.service - OpenSSH per-connection server daemon (10.0.0.1:44898). Jul 2 06:58:15.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:44898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:15.311559 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 06:58:15.311614 kernel: audit: type=1130 audit(1719903495.310:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:44898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:15.336000 audit[3938]: USER_ACCT pid=3938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.337428 sshd[3938]: Accepted publickey for core from 10.0.0.1 port 44898 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:15.338418 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:15.337000 audit[3938]: CRED_ACQ pid=3938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.342089 systemd-logind[1375]: New session 11 of user core. 
Jul 2 06:58:15.356844 kernel: audit: type=1101 audit(1719903495.336:314): pid=3938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.356902 kernel: audit: type=1103 audit(1719903495.337:315): pid=3938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.356922 kernel: audit: type=1006 audit(1719903495.337:316): pid=3938 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 2 06:58:15.337000 audit[3938]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd92726410 a2=3 a3=7f6bbc497480 items=0 ppid=1 pid=3938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:15.364010 kernel: audit: type=1300 audit(1719903495.337:316): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd92726410 a2=3 a3=7f6bbc497480 items=0 ppid=1 pid=3938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:15.364054 kernel: audit: type=1327 audit(1719903495.337:316): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:15.337000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:15.375847 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 06:58:15.380000 audit[3938]: USER_START pid=3938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.380000 audit[3941]: CRED_ACQ pid=3941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.388807 kernel: audit: type=1105 audit(1719903495.380:317): pid=3938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.388942 kernel: audit: type=1103 audit(1719903495.380:318): pid=3941 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.527148 sshd[3938]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:15.527000 audit[3938]: USER_END pid=3938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.527000 audit[3938]: CRED_DISP pid=3938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.544557 kernel: audit: type=1106 audit(1719903495.527:319): pid=3938 
uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.544619 kernel: audit: type=1104 audit(1719903495.527:320): pid=3938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.549837 containerd[1393]: time="2024-07-02T06:58:15.549779617Z" level=info msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" Jul 2 06:58:15.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:44912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:15.551870 systemd[1]: Started sshd@11-10.0.0.85:22-10.0.0.1:44912.service - OpenSSH per-connection server daemon (10.0.0.1:44912). Jul 2 06:58:15.552702 systemd[1]: sshd@10-10.0.0.85:22-10.0.0.1:44898.service: Deactivated successfully. Jul 2 06:58:15.553780 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 06:58:15.554749 systemd-logind[1375]: Session 11 logged out. Waiting for processes to exit. Jul 2 06:58:15.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.85:22-10.0.0.1:44898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:15.555646 systemd-logind[1375]: Removed session 11. 
Jul 2 06:58:15.578000 audit[3951]: USER_ACCT pid=3951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.579514 sshd[3951]: Accepted publickey for core from 10.0.0.1 port 44912 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:15.580000 audit[3951]: CRED_ACQ pid=3951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.580000 audit[3951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdde068700 a2=3 a3=7f6e1845a480 items=0 ppid=1 pid=3951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:15.580000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:15.580742 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:15.584706 systemd-logind[1375]: New session 12 of user core. Jul 2 06:58:15.588611 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 2 06:58:15.593000 audit[3951]: USER_START pid=3951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.595000 audit[3979]: CRED_ACQ pid=3979 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.672 [INFO][3970] k8s.go 608: Cleaning up netns ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.672 [INFO][3970] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" iface="eth0" netns="/var/run/netns/cni-c0c9b03b-b5d0-1f2b-1a2d-a40f5762d704" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.673 [INFO][3970] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" iface="eth0" netns="/var/run/netns/cni-c0c9b03b-b5d0-1f2b-1a2d-a40f5762d704" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.673 [INFO][3970] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" iface="eth0" netns="/var/run/netns/cni-c0c9b03b-b5d0-1f2b-1a2d-a40f5762d704" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.673 [INFO][3970] k8s.go 615: Releasing IP address(es) ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.673 [INFO][3970] utils.go 188: Calico CNI releasing IP address ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.723 [INFO][3986] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.724 [INFO][3986] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.724 [INFO][3986] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.795 [WARNING][3986] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.795 [INFO][3986] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.798 [INFO][3986] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:15.800879 containerd[1393]: 2024-07-02 06:58:15.799 [INFO][3970] k8s.go 621: Teardown processing complete. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:15.803891 systemd[1]: run-netns-cni\x2dc0c9b03b\x2db5d0\x2d1f2b\x2d1a2d\x2da40f5762d704.mount: Deactivated successfully. 
Jul 2 06:58:15.804758 containerd[1393]: time="2024-07-02T06:58:15.804723668Z" level=info msg="TearDown network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" successfully" Jul 2 06:58:15.804832 containerd[1393]: time="2024-07-02T06:58:15.804819338Z" level=info msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" returns successfully" Jul 2 06:58:15.805587 containerd[1393]: time="2024-07-02T06:58:15.805566851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hth2l,Uid:3da56065-eacb-45a3-bb8d-c1271ca90971,Namespace:calico-system,Attempt:1,}" Jul 2 06:58:15.989698 sshd[3951]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:15.993000 audit[3951]: USER_END pid=3951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.994000 audit[3951]: CRED_DISP pid=3951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:15.997850 systemd[1]: Started sshd@12-10.0.0.85:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924). Jul 2 06:58:15.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:44924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:15.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.85:22-10.0.0.1:44912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:15.998648 systemd[1]: sshd@11-10.0.0.85:22-10.0.0.1:44912.service: Deactivated successfully. Jul 2 06:58:16.000409 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 06:58:16.001263 systemd-logind[1375]: Session 12 logged out. Waiting for processes to exit. Jul 2 06:58:16.006264 systemd-logind[1375]: Removed session 12. Jul 2 06:58:16.040000 audit[4008]: USER_ACCT pid=4008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.041568 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:16.042000 audit[4008]: CRED_ACQ pid=4008 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.042000 audit[4008]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6feb5b40 a2=3 a3=7f575dff4480 items=0 ppid=1 pid=4008 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:16.042000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:16.043510 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:16.048895 systemd-logind[1375]: New session 13 of user core. Jul 2 06:58:16.054707 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 06:58:16.061000 audit[4008]: USER_START pid=4008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.062000 audit[4020]: CRED_ACQ pid=4020 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.084285 systemd-networkd[1177]: calib6e094e7685: Link UP Jul 2 06:58:16.086630 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:58:16.086712 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib6e094e7685: link becomes ready Jul 2 06:58:16.086830 systemd-networkd[1177]: calib6e094e7685: Gained carrier Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:15.979 [INFO][3995] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--hth2l-eth0 csi-node-driver- calico-system 3da56065-eacb-45a3-bb8d-c1271ca90971 831 0 2024-07-02 06:57:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-hth2l eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib6e094e7685 [] []}} ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:15.979 [INFO][3995] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.036 [INFO][4009] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" HandleID="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.051 [INFO][4009] ipam_plugin.go 264: Auto assigning IP ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" HandleID="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036c270), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-hth2l", "timestamp":"2024-07-02 06:58:16.036415735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.051 [INFO][4009] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.051 [INFO][4009] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.051 [INFO][4009] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.054 [INFO][4009] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.060 [INFO][4009] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.065 [INFO][4009] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.067 [INFO][4009] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.070 [INFO][4009] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.070 [INFO][4009] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.071 [INFO][4009] ipam.go 1685: Creating new handle: k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65 Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.075 [INFO][4009] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.079 [INFO][4009] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" host="localhost" Jul 2 
06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.079 [INFO][4009] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" host="localhost" Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.079 [INFO][4009] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:16.102317 containerd[1393]: 2024-07-02 06:58:16.079 [INFO][4009] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" HandleID="k8s-pod-network.9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.081 [INFO][3995] k8s.go 386: Populated endpoint ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hth2l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3da56065-eacb-45a3-bb8d-c1271ca90971", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-hth2l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib6e094e7685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.082 [INFO][3995] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.082 [INFO][3995] dataplane_linux.go 68: Setting the host side veth name to calib6e094e7685 ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.086 [INFO][3995] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.087 [INFO][3995] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hth2l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3da56065-eacb-45a3-bb8d-c1271ca90971", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65", Pod:"csi-node-driver-hth2l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib6e094e7685", MAC:"ba:03:4b:e5:d7:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:16.103160 containerd[1393]: 2024-07-02 06:58:16.099 [INFO][3995] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65" Namespace="calico-system" Pod="csi-node-driver-hth2l" WorkloadEndpoint="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:16.110000 audit[4038]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4038 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:16.110000 audit[4038]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffce8bb0680 
a2=0 a3=7ffce8bb066c items=0 ppid=3726 pid=4038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:16.110000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:16.163817 containerd[1393]: time="2024-07-02T06:58:16.163738500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:58:16.164134 containerd[1393]: time="2024-07-02T06:58:16.164081764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:16.164134 containerd[1393]: time="2024-07-02T06:58:16.164100960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:58:16.164134 containerd[1393]: time="2024-07-02T06:58:16.164110438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:16.190594 sshd[4008]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:16.191000 audit[4008]: USER_END pid=4008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.191669 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:58:16.191000 audit[4008]: CRED_DISP pid=4008 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:16.193902 systemd[1]: sshd@12-10.0.0.85:22-10.0.0.1:44924.service: Deactivated successfully. Jul 2 06:58:16.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.85:22-10.0.0.1:44924 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:16.195178 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 06:58:16.195926 systemd-logind[1375]: Session 13 logged out. Waiting for processes to exit. Jul 2 06:58:16.196764 systemd-logind[1375]: Removed session 13. 
Jul 2 06:58:16.205284 containerd[1393]: time="2024-07-02T06:58:16.205247940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hth2l,Uid:3da56065-eacb-45a3-bb8d-c1271ca90971,Namespace:calico-system,Attempt:1,} returns sandbox id \"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65\"" Jul 2 06:58:16.206947 containerd[1393]: time="2024-07-02T06:58:16.206914449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 06:58:16.549362 containerd[1393]: time="2024-07-02T06:58:16.549287337Z" level=info msg="StopPodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.590 [INFO][4106] k8s.go 608: Cleaning up netns ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.590 [INFO][4106] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" iface="eth0" netns="/var/run/netns/cni-57a709ab-0aa4-66a9-39db-bc5dbf3071d9" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.591 [INFO][4106] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" iface="eth0" netns="/var/run/netns/cni-57a709ab-0aa4-66a9-39db-bc5dbf3071d9" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.591 [INFO][4106] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" iface="eth0" netns="/var/run/netns/cni-57a709ab-0aa4-66a9-39db-bc5dbf3071d9" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.591 [INFO][4106] k8s.go 615: Releasing IP address(es) ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.591 [INFO][4106] utils.go 188: Calico CNI releasing IP address ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.609 [INFO][4113] ipam_plugin.go 411: Releasing address using handleID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.609 [INFO][4113] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.609 [INFO][4113] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.632 [WARNING][4113] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.632 [INFO][4113] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.634 [INFO][4113] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:16.636613 containerd[1393]: 2024-07-02 06:58:16.635 [INFO][4106] k8s.go 621: Teardown processing complete. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:16.637431 containerd[1393]: time="2024-07-02T06:58:16.636756070Z" level=info msg="TearDown network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" successfully" Jul 2 06:58:16.637431 containerd[1393]: time="2024-07-02T06:58:16.636785495Z" level=info msg="StopPodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" returns successfully" Jul 2 06:58:16.637498 kubelet[2418]: E0702 06:58:16.637076 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:16.637756 containerd[1393]: time="2024-07-02T06:58:16.637448320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22sns,Uid:9c333969-8c44-45e3-a4bd-3452f33a72a4,Namespace:kube-system,Attempt:1,}" Jul 2 06:58:16.771791 systemd-networkd[1177]: cali3f8bf3f0a98: Link UP Jul 2 06:58:16.773010 systemd-networkd[1177]: cali3f8bf3f0a98: Gained carrier Jul 2 06:58:16.773512 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cali3f8bf3f0a98: link becomes ready Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.717 [INFO][4122] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--22sns-eth0 coredns-5dd5756b68- kube-system 9c333969-8c44-45e3-a4bd-3452f33a72a4 854 0 2024-07-02 06:57:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-22sns eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f8bf3f0a98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.717 [INFO][4122] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.740 [INFO][4135] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" HandleID="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.748 [INFO][4135] ipam_plugin.go 264: Auto assigning IP ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" HandleID="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003660a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-22sns", "timestamp":"2024-07-02 06:58:16.740310629 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.748 [INFO][4135] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.748 [INFO][4135] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.748 [INFO][4135] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.750 [INFO][4135] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.754 [INFO][4135] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.757 [INFO][4135] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.759 [INFO][4135] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.760 [INFO][4135] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.760 [INFO][4135] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" host="localhost" 
Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.761 [INFO][4135] ipam.go 1685: Creating new handle: k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.764 [INFO][4135] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.768 [INFO][4135] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.768 [INFO][4135] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" host="localhost" Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.768 [INFO][4135] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:58:16.782501 containerd[1393]: 2024-07-02 06:58:16.768 [INFO][4135] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" HandleID="k8s-pod-network.0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.770 [INFO][4122] k8s.go 386: Populated endpoint ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--22sns-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9c333969-8c44-45e3-a4bd-3452f33a72a4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-22sns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8bf3f0a98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.770 [INFO][4122] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.770 [INFO][4122] dataplane_linux.go 68: Setting the host side veth name to cali3f8bf3f0a98 ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.772 [INFO][4122] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.773 [INFO][4122] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--22sns-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9c333969-8c44-45e3-a4bd-3452f33a72a4", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a", Pod:"coredns-5dd5756b68-22sns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8bf3f0a98", MAC:"66:a6:39:e3:29:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:16.783276 containerd[1393]: 2024-07-02 06:58:16.780 [INFO][4122] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a" Namespace="kube-system" Pod="coredns-5dd5756b68-22sns" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:16.792000 audit[4157]: NETFILTER_CFG 
table=filter:102 family=2 entries=38 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:16.792000 audit[4157]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffc3b72d1f0 a2=0 a3=7ffc3b72d1dc items=0 ppid=3726 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:16.792000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:16.804913 systemd[1]: run-netns-cni\x2d57a709ab\x2d0aa4\x2d66a9\x2d39db\x2dbc5dbf3071d9.mount: Deactivated successfully. Jul 2 06:58:16.807178 containerd[1393]: time="2024-07-02T06:58:16.806879109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:58:16.807178 containerd[1393]: time="2024-07-02T06:58:16.806987862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:16.807178 containerd[1393]: time="2024-07-02T06:58:16.807019983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:58:16.807178 containerd[1393]: time="2024-07-02T06:58:16.807034139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:16.834066 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:58:16.858476 containerd[1393]: time="2024-07-02T06:58:16.858425690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-22sns,Uid:9c333969-8c44-45e3-a4bd-3452f33a72a4,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a\"" Jul 2 06:58:16.859124 kubelet[2418]: E0702 06:58:16.859098 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:16.861195 containerd[1393]: time="2024-07-02T06:58:16.861145265Z" level=info msg="CreateContainer within sandbox \"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:58:16.881606 containerd[1393]: time="2024-07-02T06:58:16.881556400Z" level=info msg="CreateContainer within sandbox \"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b400e56b7cba3a5944036c2e9ee0070161a24bffceeb1a36482d3be6b769fea\"" Jul 2 06:58:16.882095 containerd[1393]: time="2024-07-02T06:58:16.882066888Z" level=info msg="StartContainer for \"9b400e56b7cba3a5944036c2e9ee0070161a24bffceeb1a36482d3be6b769fea\"" Jul 2 06:58:16.925442 containerd[1393]: time="2024-07-02T06:58:16.925332705Z" level=info msg="StartContainer for \"9b400e56b7cba3a5944036c2e9ee0070161a24bffceeb1a36482d3be6b769fea\" returns successfully" Jul 2 06:58:17.553729 containerd[1393]: time="2024-07-02T06:58:17.553669662Z" level=info msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" Jul 2 06:58:17.553943 containerd[1393]: time="2024-07-02T06:58:17.553757617Z" 
level=info msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" Jul 2 06:58:17.605025 systemd-networkd[1177]: calib6e094e7685: Gained IPv6LL Jul 2 06:58:17.798923 kubelet[2418]: E0702 06:58:17.798724 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:17.805256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2284760433.mount: Deactivated successfully. Jul 2 06:58:17.925271 kubelet[2418]: I0702 06:58:17.924975 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-22sns" podStartSLOduration=35.924903967 podCreationTimestamp="2024-07-02 06:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:58:17.923629585 +0000 UTC m=+48.461847642" watchObservedRunningTime="2024-07-02 06:58:17.924903967 +0000 UTC m=+48.463122024" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.621 [INFO][4271] k8s.go 608: Cleaning up netns ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.621 [INFO][4271] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" iface="eth0" netns="/var/run/netns/cni-aa5e769c-754e-c9ed-f6e5-5ce95ef11fd8" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.622 [INFO][4271] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" iface="eth0" netns="/var/run/netns/cni-aa5e769c-754e-c9ed-f6e5-5ce95ef11fd8" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.622 [INFO][4271] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" iface="eth0" netns="/var/run/netns/cni-aa5e769c-754e-c9ed-f6e5-5ce95ef11fd8" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.622 [INFO][4271] k8s.go 615: Releasing IP address(es) ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.622 [INFO][4271] utils.go 188: Calico CNI releasing IP address ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.639 [INFO][4289] ipam_plugin.go 411: Releasing address using handleID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.639 [INFO][4289] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.639 [INFO][4289] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.830 [WARNING][4289] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.830 [INFO][4289] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.922 [INFO][4289] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:17.927275 containerd[1393]: 2024-07-02 06:58:17.925 [INFO][4271] k8s.go 621: Teardown processing complete. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:17.929672 systemd[1]: run-netns-cni\x2daa5e769c\x2d754e\x2dc9ed\x2df6e5\x2d5ce95ef11fd8.mount: Deactivated successfully. 
Jul 2 06:58:17.930227 containerd[1393]: time="2024-07-02T06:58:17.930170202Z" level=info msg="TearDown network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" successfully" Jul 2 06:58:17.930227 containerd[1393]: time="2024-07-02T06:58:17.930217681Z" level=info msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" returns successfully" Jul 2 06:58:17.930912 containerd[1393]: time="2024-07-02T06:58:17.930878631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8f857889-tt9xj,Uid:9c276f6d-ef93-47c1-b679-37a0af5e9a64,Namespace:calico-system,Attempt:1,}" Jul 2 06:58:17.973905 containerd[1393]: time="2024-07-02T06:58:17.973835974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:18.141917 containerd[1393]: time="2024-07-02T06:58:18.141822038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 06:58:18.181000 audit[4305]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:18.181000 audit[4305]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffe47768bd0 a2=0 a3=7ffe47768bbc items=0 ppid=2608 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:18.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:18.182000 audit[4305]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=4305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:18.182000 audit[4305]: SYSCALL arch=c000003e syscall=46 
success=yes exit=3468 a0=3 a1=7ffe47768bd0 a2=0 a3=0 items=0 ppid=2608 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:18.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:18.289451 containerd[1393]: time="2024-07-02T06:58:18.289398490Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:18.435519 systemd-networkd[1177]: cali3f8bf3f0a98: Gained IPv6LL Jul 2 06:58:18.437326 containerd[1393]: time="2024-07-02T06:58:18.437288391Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.689 [INFO][4270] k8s.go 608: Cleaning up netns ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.690 [INFO][4270] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" iface="eth0" netns="/var/run/netns/cni-5a2a16af-1e4e-1af7-f200-48b71f66b85c" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.690 [INFO][4270] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" iface="eth0" netns="/var/run/netns/cni-5a2a16af-1e4e-1af7-f200-48b71f66b85c" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.690 [INFO][4270] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" iface="eth0" netns="/var/run/netns/cni-5a2a16af-1e4e-1af7-f200-48b71f66b85c" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.690 [INFO][4270] k8s.go 615: Releasing IP address(es) ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.690 [INFO][4270] utils.go 188: Calico CNI releasing IP address ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.720 [INFO][4297] ipam_plugin.go 411: Releasing address using handleID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.720 [INFO][4297] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:17.923 [INFO][4297] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:18.062 [WARNING][4297] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:18.063 [INFO][4297] ipam_plugin.go 439: Releasing address using workloadID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:18.648 [INFO][4297] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:18.651498 containerd[1393]: 2024-07-02 06:58:18.650 [INFO][4270] k8s.go 621: Teardown processing complete. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:18.652200 containerd[1393]: time="2024-07-02T06:58:18.652163248Z" level=info msg="TearDown network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" successfully" Jul 2 06:58:18.652287 containerd[1393]: time="2024-07-02T06:58:18.652270229Z" level=info msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" returns successfully" Jul 2 06:58:18.653182 kubelet[2418]: E0702 06:58:18.652689 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:18.653512 containerd[1393]: time="2024-07-02T06:58:18.653491481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5tvlc,Uid:48ddd97c-85ff-49ef-8095-30d9677f14bd,Namespace:kube-system,Attempt:1,}" Jul 2 06:58:18.654648 systemd[1]: run-netns-cni\x2d5a2a16af\x2d1e4e\x2d1af7\x2df200\x2d48b71f66b85c.mount: Deactivated successfully. 
Jul 2 06:58:18.686000 audit[4307]: NETFILTER_CFG table=filter:105 family=2 entries=11 op=nft_register_rule pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:18.686000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd1e03dd20 a2=0 a3=7ffd1e03dd0c items=0 ppid=2608 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:18.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:18.687000 audit[4307]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=4307 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:18.687000 audit[4307]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd1e03dd20 a2=0 a3=7ffd1e03dd0c items=0 ppid=2608 pid=4307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:18.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:18.792282 containerd[1393]: time="2024-07-02T06:58:18.792215047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:18.792907 containerd[1393]: time="2024-07-02T06:58:18.792872030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.585766383s" Jul 2 06:58:18.792907 containerd[1393]: time="2024-07-02T06:58:18.792905994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 06:58:18.794606 containerd[1393]: time="2024-07-02T06:58:18.794575207Z" level=info msg="CreateContainer within sandbox \"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 06:58:18.799967 kubelet[2418]: E0702 06:58:18.799947 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:18.986222 containerd[1393]: time="2024-07-02T06:58:18.983799210Z" level=info msg="CreateContainer within sandbox \"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"207469a10f0b7901774cfe33e55d49584d82219cac99318aa5482cf44795e371\"" Jul 2 06:58:18.986222 containerd[1393]: time="2024-07-02T06:58:18.984541132Z" level=info msg="StartContainer for \"207469a10f0b7901774cfe33e55d49584d82219cac99318aa5482cf44795e371\"" Jul 2 06:58:19.081606 containerd[1393]: time="2024-07-02T06:58:19.081557959Z" level=info msg="StartContainer for \"207469a10f0b7901774cfe33e55d49584d82219cac99318aa5482cf44795e371\" returns successfully" Jul 2 06:58:19.083311 containerd[1393]: time="2024-07-02T06:58:19.083281004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 06:58:19.090175 systemd-networkd[1177]: calia52cd0b37db: Link UP Jul 2 06:58:19.093983 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia52cd0b37db: link becomes ready Jul 2 06:58:19.093046 systemd-networkd[1177]: calia52cd0b37db: Gained 
carrier Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:18.981 [INFO][4311] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0 calico-kube-controllers-d8f857889- calico-system 9c276f6d-ef93-47c1-b679-37a0af5e9a64 867 0 2024-07-02 06:57:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d8f857889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-d8f857889-tt9xj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia52cd0b37db [] []}} ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:18.981 [INFO][4311] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.023 [INFO][4336] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" HandleID="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.036 [INFO][4336] ipam_plugin.go 264: Auto assigning IP ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" 
HandleID="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000516a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-d8f857889-tt9xj", "timestamp":"2024-07-02 06:58:19.02307713 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.036 [INFO][4336] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.036 [INFO][4336] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.036 [INFO][4336] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.039 [INFO][4336] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.044 [INFO][4336] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.049 [INFO][4336] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.051 [INFO][4336] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.054 [INFO][4336] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.054 [INFO][4336] ipam.go 
1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.057 [INFO][4336] ipam.go 1685: Creating new handle: k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.061 [INFO][4336] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.080 [INFO][4336] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.081 [INFO][4336] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" host="localhost" Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.081 [INFO][4336] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:58:19.104698 containerd[1393]: 2024-07-02 06:58:19.081 [INFO][4336] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" HandleID="k8s-pod-network.0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.085 [INFO][4311] k8s.go 386: Populated endpoint ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0", GenerateName:"calico-kube-controllers-d8f857889-", Namespace:"calico-system", SelfLink:"", UID:"9c276f6d-ef93-47c1-b679-37a0af5e9a64", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8f857889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-d8f857889-tt9xj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52cd0b37db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.085 [INFO][4311] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.085 [INFO][4311] dataplane_linux.go 68: Setting the host side veth name to calia52cd0b37db ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.091 [INFO][4311] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.091 [INFO][4311] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0", GenerateName:"calico-kube-controllers-d8f857889-", 
Namespace:"calico-system", SelfLink:"", UID:"9c276f6d-ef93-47c1-b679-37a0af5e9a64", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8f857889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a", Pod:"calico-kube-controllers-d8f857889-tt9xj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52cd0b37db", MAC:"52:4b:9d:36:4e:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:19.105219 containerd[1393]: 2024-07-02 06:58:19.100 [INFO][4311] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a" Namespace="calico-system" Pod="calico-kube-controllers-d8f857889-tt9xj" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:19.115000 audit[4405]: NETFILTER_CFG table=filter:107 family=2 entries=38 op=nft_register_chain pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:19.115000 audit[4405]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffe881f9480 a2=0 a3=7ffe881f946c items=0 ppid=3726 pid=4405 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:19.115000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:19.127304 containerd[1393]: time="2024-07-02T06:58:19.127072734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:58:19.127304 containerd[1393]: time="2024-07-02T06:58:19.127118780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:19.127304 containerd[1393]: time="2024-07-02T06:58:19.127135201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:58:19.127304 containerd[1393]: time="2024-07-02T06:58:19.127146772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:19.128031 systemd-networkd[1177]: calid275bb25cf5: Link UP Jul 2 06:58:19.129486 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid275bb25cf5: link becomes ready Jul 2 06:58:19.129429 systemd-networkd[1177]: calid275bb25cf5: Gained carrier Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.022 [INFO][4326] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--5dd5756b68--5tvlc-eth0 coredns-5dd5756b68- kube-system 48ddd97c-85ff-49ef-8095-30d9677f14bd 868 0 2024-07-02 06:57:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-5dd5756b68-5tvlc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid275bb25cf5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.022 [INFO][4326] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.054 [INFO][4368] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" HandleID="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.080 [INFO][4368] ipam_plugin.go 264: Auto assigning IP 
ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" HandleID="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddde0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-5dd5756b68-5tvlc", "timestamp":"2024-07-02 06:58:19.054077883 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.080 [INFO][4368] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.081 [INFO][4368] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.081 [INFO][4368] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.083 [INFO][4368] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.094 [INFO][4368] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.100 [INFO][4368] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.107 [INFO][4368] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.110 [INFO][4368] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 
2024-07-02 06:58:19.110 [INFO][4368] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.112 [INFO][4368] ipam.go 1685: Creating new handle: k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6 Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.116 [INFO][4368] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.122 [INFO][4368] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.122 [INFO][4368] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" host="localhost" Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.122 [INFO][4368] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:58:19.145037 containerd[1393]: 2024-07-02 06:58:19.122 [INFO][4368] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" HandleID="k8s-pod-network.4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.124 [INFO][4326] k8s.go 386: Populated endpoint ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5tvlc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"48ddd97c-85ff-49ef-8095-30d9677f14bd", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-5dd5756b68-5tvlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid275bb25cf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.124 [INFO][4326] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.124 [INFO][4326] dataplane_linux.go 68: Setting the host side veth name to calid275bb25cf5 ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.129 [INFO][4326] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.130 [INFO][4326] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5tvlc-eth0", 
GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"48ddd97c-85ff-49ef-8095-30d9677f14bd", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6", Pod:"coredns-5dd5756b68-5tvlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid275bb25cf5", MAC:"56:16:bb:36:24:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:19.145638 containerd[1393]: 2024-07-02 06:58:19.138 [INFO][4326] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6" Namespace="kube-system" Pod="coredns-5dd5756b68-5tvlc" WorkloadEndpoint="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:19.158000 audit[4452]: NETFILTER_CFG 
table=filter:108 family=2 entries=38 op=nft_register_chain pid=4452 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:19.158000 audit[4452]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffcb4f957d0 a2=0 a3=7ffcb4f957bc items=0 ppid=3726 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:19.158000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:19.161640 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:58:19.177041 containerd[1393]: time="2024-07-02T06:58:19.176968710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:58:19.177609 containerd[1393]: time="2024-07-02T06:58:19.177022381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:19.177609 containerd[1393]: time="2024-07-02T06:58:19.177038691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:58:19.177609 containerd[1393]: time="2024-07-02T06:58:19.177050764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:19.198538 containerd[1393]: time="2024-07-02T06:58:19.198485293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d8f857889-tt9xj,Uid:9c276f6d-ef93-47c1-b679-37a0af5e9a64,Namespace:calico-system,Attempt:1,} returns sandbox id \"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a\"" Jul 2 06:58:19.204840 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:58:19.233739 containerd[1393]: time="2024-07-02T06:58:19.233677304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-5tvlc,Uid:48ddd97c-85ff-49ef-8095-30d9677f14bd,Namespace:kube-system,Attempt:1,} returns sandbox id \"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6\"" Jul 2 06:58:19.234611 kubelet[2418]: E0702 06:58:19.234590 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:19.237299 containerd[1393]: time="2024-07-02T06:58:19.237049834Z" level=info msg="CreateContainer within sandbox \"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:58:19.260271 containerd[1393]: time="2024-07-02T06:58:19.260214279Z" level=info msg="CreateContainer within sandbox \"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c80cb0bd2b0d8876ce3df1db08126446a717469944c436dd29119fbe1ed8c90\"" Jul 2 06:58:19.260841 containerd[1393]: time="2024-07-02T06:58:19.260818423Z" level=info msg="StartContainer for \"3c80cb0bd2b0d8876ce3df1db08126446a717469944c436dd29119fbe1ed8c90\"" Jul 2 06:58:19.309788 containerd[1393]: time="2024-07-02T06:58:19.307511319Z" level=info msg="StartContainer for 
\"3c80cb0bd2b0d8876ce3df1db08126446a717469944c436dd29119fbe1ed8c90\" returns successfully" Jul 2 06:58:19.804659 kubelet[2418]: E0702 06:58:19.804466 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:19.808196 kubelet[2418]: E0702 06:58:19.808107 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:19.815178 kubelet[2418]: I0702 06:58:19.815137 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5tvlc" podStartSLOduration=37.815088381 podCreationTimestamp="2024-07-02 06:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:58:19.814590606 +0000 UTC m=+50.352808663" watchObservedRunningTime="2024-07-02 06:58:19.815088381 +0000 UTC m=+50.353306448" Jul 2 06:58:19.826000 audit[4559]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:19.826000 audit[4559]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd6782a5c0 a2=0 a3=7ffd6782a5ac items=0 ppid=2608 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:19.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:19.827000 audit[4559]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4559 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:19.827000 audit[4559]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd6782a5c0 a2=0 a3=7ffd6782a5ac items=0 ppid=2608 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:19.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:20.356530 systemd-networkd[1177]: calid275bb25cf5: Gained IPv6LL Jul 2 06:58:20.483567 systemd-networkd[1177]: calia52cd0b37db: Gained IPv6LL Jul 2 06:58:20.810195 kubelet[2418]: E0702 06:58:20.810076 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:20.840000 audit[4567]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:20.850483 kernel: kauditd_printk_skb: 53 callbacks suppressed Jul 2 06:58:20.850630 kernel: audit: type=1325 audit(1719903500.840:350): table=filter:111 family=2 entries=8 op=nft_register_rule pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:20.840000 audit[4567]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd58d19860 a2=0 a3=7ffd58d1984c items=0 ppid=2608 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:20.857025 kernel: audit: type=1300 audit(1719903500.840:350): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd58d19860 a2=0 a3=7ffd58d1984c items=0 ppid=2608 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:20.857156 kernel: audit: type=1327 audit(1719903500.840:350): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:20.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:20.894000 audit[4567]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:20.894000 audit[4567]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd58d19860 a2=0 a3=7ffd58d1984c items=0 ppid=2608 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:20.901502 kernel: audit: type=1325 audit(1719903500.894:351): table=nat:112 family=2 entries=56 op=nft_register_chain pid=4567 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:20.901664 kernel: audit: type=1300 audit(1719903500.894:351): arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd58d19860 a2=0 a3=7ffd58d1984c items=0 ppid=2608 pid=4567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:20.901691 kernel: audit: type=1327 audit(1719903500.894:351): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:20.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:21.198833 systemd[1]: Started sshd@13-10.0.0.85:22-10.0.0.1:44940.service - OpenSSH per-connection 
server daemon (10.0.0.1:44940). Jul 2 06:58:21.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:44940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:21.212476 kernel: audit: type=1130 audit(1719903501.198:352): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:44940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:21.314727 sshd[4573]: Accepted publickey for core from 10.0.0.1 port 44940 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:21.314000 audit[4573]: USER_ACCT pid=4573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.316725 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:21.322568 kernel: audit: type=1101 audit(1719903501.314:353): pid=4573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.322670 kernel: audit: type=1103 audit(1719903501.315:354): pid=4573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.315000 audit[4573]: CRED_ACQ pid=4573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 
terminal=ssh res=success' Jul 2 06:58:21.325137 kernel: audit: type=1006 audit(1719903501.315:355): pid=4573 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 2 06:58:21.315000 audit[4573]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff9d37c880 a2=3 a3=7f9a49638480 items=0 ppid=1 pid=4573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:21.325338 systemd-logind[1375]: New session 14 of user core. Jul 2 06:58:21.315000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:21.329708 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 06:58:21.335000 audit[4573]: USER_START pid=4573 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.336000 audit[4576]: CRED_ACQ pid=4576 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.367465 kubelet[2418]: I0702 06:58:21.367399 2418 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:58:21.368304 kubelet[2418]: E0702 06:58:21.368242 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:21.428295 systemd[1]: run-containerd-runc-k8s.io-b0e9c0b6e26c0b74cc8eb12c2b61acc4385a393258fb98eccefb5d5c7ec407a0-runc.8sDBdY.mount: Deactivated successfully. 
Jul 2 06:58:21.469260 sshd[4573]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:21.470000 audit[4573]: USER_END pid=4573 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.470000 audit[4573]: CRED_DISP pid=4573 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:21.472522 systemd-logind[1375]: Session 14 logged out. Waiting for processes to exit. Jul 2 06:58:21.472695 systemd[1]: sshd@13-10.0.0.85:22-10.0.0.1:44940.service: Deactivated successfully. Jul 2 06:58:21.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.85:22-10.0.0.1:44940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:21.473640 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 06:58:21.474647 systemd-logind[1375]: Removed session 14. 
Jul 2 06:58:21.648210 containerd[1393]: time="2024-07-02T06:58:21.648169367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:21.693391 containerd[1393]: time="2024-07-02T06:58:21.693295684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 06:58:21.740684 containerd[1393]: time="2024-07-02T06:58:21.740558911Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:21.797316 containerd[1393]: time="2024-07-02T06:58:21.797262285Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:21.812590 kubelet[2418]: E0702 06:58:21.812567 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:21.859957 containerd[1393]: time="2024-07-02T06:58:21.859882391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:21.860761 containerd[1393]: time="2024-07-02T06:58:21.860707549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.777248251s" Jul 2 06:58:21.860832 containerd[1393]: 
time="2024-07-02T06:58:21.860772442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 06:58:21.862044 containerd[1393]: time="2024-07-02T06:58:21.862020504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 06:58:21.862962 containerd[1393]: time="2024-07-02T06:58:21.862876049Z" level=info msg="CreateContainer within sandbox \"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 06:58:22.134253 containerd[1393]: time="2024-07-02T06:58:22.134187860Z" level=info msg="CreateContainer within sandbox \"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4db2a7632898340aee0b051efec4a803548dec774fdbef25cea1026ab7652989\"" Jul 2 06:58:22.134875 containerd[1393]: time="2024-07-02T06:58:22.134846536Z" level=info msg="StartContainer for \"4db2a7632898340aee0b051efec4a803548dec774fdbef25cea1026ab7652989\"" Jul 2 06:58:22.182747 containerd[1393]: time="2024-07-02T06:58:22.182697792Z" level=info msg="StartContainer for \"4db2a7632898340aee0b051efec4a803548dec774fdbef25cea1026ab7652989\" returns successfully" Jul 2 06:58:22.752551 kubelet[2418]: I0702 06:58:22.752520 2418 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 06:58:22.752551 kubelet[2418]: I0702 06:58:22.752549 2418 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 06:58:24.842388 containerd[1393]: time="2024-07-02T06:58:24.842317655Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:24.843325 containerd[1393]: time="2024-07-02T06:58:24.843273559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 06:58:24.844598 containerd[1393]: time="2024-07-02T06:58:24.844567797Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:24.856615 containerd[1393]: time="2024-07-02T06:58:24.856568355Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:24.859156 containerd[1393]: time="2024-07-02T06:58:24.859093724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:58:24.859932 containerd[1393]: time="2024-07-02T06:58:24.859873317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.997677495s" Jul 2 06:58:24.859997 containerd[1393]: time="2024-07-02T06:58:24.859928982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 06:58:24.866339 containerd[1393]: time="2024-07-02T06:58:24.866278566Z" level=info msg="CreateContainer within sandbox 
\"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 06:58:24.885247 containerd[1393]: time="2024-07-02T06:58:24.885206553Z" level=info msg="CreateContainer within sandbox \"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9cd996a63cabf2855e65b3d55ab2ba69025140b51065bf3f23cf96c5023c9dd1\"" Jul 2 06:58:24.886008 containerd[1393]: time="2024-07-02T06:58:24.885982930Z" level=info msg="StartContainer for \"9cd996a63cabf2855e65b3d55ab2ba69025140b51065bf3f23cf96c5023c9dd1\"" Jul 2 06:58:24.946623 containerd[1393]: time="2024-07-02T06:58:24.946569617Z" level=info msg="StartContainer for \"9cd996a63cabf2855e65b3d55ab2ba69025140b51065bf3f23cf96c5023c9dd1\" returns successfully" Jul 2 06:58:25.835218 kubelet[2418]: I0702 06:58:25.834980 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-d8f857889-tt9xj" podStartSLOduration=30.174698449 podCreationTimestamp="2024-07-02 06:57:50 +0000 UTC" firstStartedPulling="2024-07-02 06:58:19.19995903 +0000 UTC m=+49.738177087" lastFinishedPulling="2024-07-02 06:58:24.860202284 +0000 UTC m=+55.398420351" observedRunningTime="2024-07-02 06:58:25.834820445 +0000 UTC m=+56.373038502" watchObservedRunningTime="2024-07-02 06:58:25.834941713 +0000 UTC m=+56.373159770" Jul 2 06:58:25.835218 kubelet[2418]: I0702 06:58:25.835085 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-hth2l" podStartSLOduration=30.180484805 podCreationTimestamp="2024-07-02 06:57:50 +0000 UTC" firstStartedPulling="2024-07-02 06:58:16.206620066 +0000 UTC m=+46.744838123" lastFinishedPulling="2024-07-02 06:58:21.861203641 +0000 UTC m=+52.399421698" observedRunningTime="2024-07-02 06:58:22.980158713 +0000 UTC m=+53.518376780" 
watchObservedRunningTime="2024-07-02 06:58:25.83506838 +0000 UTC m=+56.373286437" Jul 2 06:58:26.480929 systemd[1]: Started sshd@14-10.0.0.85:22-10.0.0.1:58958.service - OpenSSH per-connection server daemon (10.0.0.1:58958). Jul 2 06:58:26.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:58958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:26.481990 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 06:58:26.482057 kernel: audit: type=1130 audit(1719903506.480:361): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:58958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:26.511000 audit[4734]: USER_ACCT pid=4734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.512511 sshd[4734]: Accepted publickey for core from 10.0.0.1 port 58958 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:26.522420 sshd[4734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:26.512000 audit[4734]: CRED_ACQ pid=4734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.528044 systemd-logind[1375]: New session 15 of user core. 
Jul 2 06:58:26.539946 kernel: audit: type=1101 audit(1719903506.511:362): pid=4734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.540021 kernel: audit: type=1103 audit(1719903506.512:363): pid=4734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.540062 kernel: audit: type=1006 audit(1719903506.512:364): pid=4734 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 2 06:58:26.540087 kernel: audit: type=1300 audit(1719903506.512:364): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98135930 a2=3 a3=7fa80771e480 items=0 ppid=1 pid=4734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:26.540111 kernel: audit: type=1327 audit(1719903506.512:364): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:26.512000 audit[4734]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd98135930 a2=3 a3=7fa80771e480 items=0 ppid=1 pid=4734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:26.512000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:26.539765 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 2 06:58:26.545000 audit[4734]: USER_START pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.547000 audit[4737]: CRED_ACQ pid=4737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.553809 kernel: audit: type=1105 audit(1719903506.545:365): pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.553890 kernel: audit: type=1103 audit(1719903506.547:366): pid=4737 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.666017 sshd[4734]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:26.666000 audit[4734]: USER_END pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.668985 systemd[1]: sshd@14-10.0.0.85:22-10.0.0.1:58958.service: Deactivated successfully. Jul 2 06:58:26.669846 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 06:58:26.670840 systemd-logind[1375]: Session 15 logged out. Waiting for processes to exit. 
Jul 2 06:58:26.666000 audit[4734]: CRED_DISP pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.672074 systemd-logind[1375]: Removed session 15. Jul 2 06:58:26.674734 kernel: audit: type=1106 audit(1719903506.666:367): pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.674789 kernel: audit: type=1104 audit(1719903506.666:368): pid=4734 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:26.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.85:22-10.0.0.1:58958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:29.548778 containerd[1393]: time="2024-07-02T06:58:29.548729654Z" level=info msg="StopPodSandbox for \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\"" Jul 2 06:58:29.549259 containerd[1393]: time="2024-07-02T06:58:29.548848724Z" level=info msg="TearDown network for sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" successfully" Jul 2 06:58:29.549259 containerd[1393]: time="2024-07-02T06:58:29.548904142Z" level=info msg="StopPodSandbox for \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" returns successfully" Jul 2 06:58:29.549259 containerd[1393]: time="2024-07-02T06:58:29.549251855Z" level=info msg="RemovePodSandbox for \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\"" Jul 2 06:58:29.558659 containerd[1393]: time="2024-07-02T06:58:29.552273024Z" level=info msg="Forcibly stopping sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\"" Jul 2 06:58:29.558778 containerd[1393]: time="2024-07-02T06:58:29.558698927Z" level=info msg="TearDown network for sandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" successfully" Jul 2 06:58:29.638941 containerd[1393]: time="2024-07-02T06:58:29.638894542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 06:58:29.639136 containerd[1393]: time="2024-07-02T06:58:29.638983414Z" level=info msg="RemovePodSandbox \"cf58bf6aa5ca93bface11b45acfb999fbea7ead6a8db33377a482d47b64cddad\" returns successfully" Jul 2 06:58:29.639551 containerd[1393]: time="2024-07-02T06:58:29.639519092Z" level=info msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.698 [WARNING][4769] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hth2l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3da56065-eacb-45a3-bb8d-c1271ca90971", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65", Pod:"csi-node-driver-hth2l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calib6e094e7685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.698 [INFO][4769] k8s.go 608: Cleaning up netns ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.698 [INFO][4769] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" iface="eth0" netns="" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.698 [INFO][4769] k8s.go 615: Releasing IP address(es) ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.698 [INFO][4769] utils.go 188: Calico CNI releasing IP address ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.723 [INFO][4777] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.723 [INFO][4777] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.724 [INFO][4777] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.728 [WARNING][4777] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.728 [INFO][4777] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.729 [INFO][4777] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:29.731936 containerd[1393]: 2024-07-02 06:58:29.730 [INFO][4769] k8s.go 621: Teardown processing complete. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:29.732497 containerd[1393]: time="2024-07-02T06:58:29.731982530Z" level=info msg="TearDown network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" successfully" Jul 2 06:58:29.732497 containerd[1393]: time="2024-07-02T06:58:29.732018680Z" level=info msg="StopPodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" returns successfully" Jul 2 06:58:29.732497 containerd[1393]: time="2024-07-02T06:58:29.732338650Z" level=info msg="RemovePodSandbox for \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" Jul 2 06:58:29.732581 containerd[1393]: time="2024-07-02T06:58:29.732364831Z" level=info msg="Forcibly stopping sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\"" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:29.994 [WARNING][4799] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--hth2l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3da56065-eacb-45a3-bb8d-c1271ca90971", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9b77cc4c2c6ab15d68465eb42e63fb769ef322ebb921d0cc4558c353b0d5ee65", Pod:"csi-node-driver-hth2l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib6e094e7685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:29.994 [INFO][4799] k8s.go 608: Cleaning up netns ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:29.994 [INFO][4799] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" iface="eth0" netns="" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:29.994 [INFO][4799] k8s.go 615: Releasing IP address(es) ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:29.994 [INFO][4799] utils.go 188: Calico CNI releasing IP address ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.096 [INFO][4812] ipam_plugin.go 411: Releasing address using handleID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.096 [INFO][4812] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.096 [INFO][4812] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.167 [WARNING][4812] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.167 [INFO][4812] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" HandleID="k8s-pod-network.ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Workload="localhost-k8s-csi--node--driver--hth2l-eth0" Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.168 [INFO][4812] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:58:30.171096 containerd[1393]: 2024-07-02 06:58:30.169 [INFO][4799] k8s.go 621: Teardown processing complete. ContainerID="ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf" Jul 2 06:58:30.171638 containerd[1393]: time="2024-07-02T06:58:30.171130016Z" level=info msg="TearDown network for sandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" successfully" Jul 2 06:58:30.582174 containerd[1393]: time="2024-07-02T06:58:30.581996704Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:58:30.582174 containerd[1393]: time="2024-07-02T06:58:30.582124180Z" level=info msg="RemovePodSandbox \"ae2c41e2ca2e9bd67e7610469537d0f09b570340fe27f85a7a0c5c3a5a284bbf\" returns successfully" Jul 2 06:58:30.582661 containerd[1393]: time="2024-07-02T06:58:30.582626111Z" level=info msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.619 [WARNING][4835] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5tvlc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"48ddd97c-85ff-49ef-8095-30d9677f14bd", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6", Pod:"coredns-5dd5756b68-5tvlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid275bb25cf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.619 [INFO][4835] k8s.go 608: Cleaning up netns 
ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.619 [INFO][4835] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" iface="eth0" netns="" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.619 [INFO][4835] k8s.go 615: Releasing IP address(es) ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.619 [INFO][4835] utils.go 188: Calico CNI releasing IP address ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.637 [INFO][4842] ipam_plugin.go 411: Releasing address using handleID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.637 [INFO][4842] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.637 [INFO][4842] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.657 [WARNING][4842] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.657 [INFO][4842] ipam_plugin.go 439: Releasing address using workloadID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.658 [INFO][4842] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:30.662071 containerd[1393]: 2024-07-02 06:58:30.660 [INFO][4835] k8s.go 621: Teardown processing complete. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.662582 containerd[1393]: time="2024-07-02T06:58:30.662131315Z" level=info msg="TearDown network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" successfully" Jul 2 06:58:30.662582 containerd[1393]: time="2024-07-02T06:58:30.662160161Z" level=info msg="StopPodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" returns successfully" Jul 2 06:58:30.662737 containerd[1393]: time="2024-07-02T06:58:30.662682782Z" level=info msg="RemovePodSandbox for \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" Jul 2 06:58:30.662805 containerd[1393]: time="2024-07-02T06:58:30.662744031Z" level=info msg="Forcibly stopping sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\"" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.740 [WARNING][4865] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--5tvlc-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"48ddd97c-85ff-49ef-8095-30d9677f14bd", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4587f0c23b72a44b10810cc4eb3bf3decf0276bf975688bf35c268b0d923abc6", Pod:"coredns-5dd5756b68-5tvlc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid275bb25cf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.741 [INFO][4865] k8s.go 608: Cleaning up netns 
ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.741 [INFO][4865] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" iface="eth0" netns="" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.741 [INFO][4865] k8s.go 615: Releasing IP address(es) ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.741 [INFO][4865] utils.go 188: Calico CNI releasing IP address ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.765 [INFO][4879] ipam_plugin.go 411: Releasing address using handleID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.765 [INFO][4879] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.765 [INFO][4879] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.770 [WARNING][4879] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.770 [INFO][4879] ipam_plugin.go 439: Releasing address using workloadID ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" HandleID="k8s-pod-network.353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Workload="localhost-k8s-coredns--5dd5756b68--5tvlc-eth0" Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.772 [INFO][4879] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:30.774915 containerd[1393]: 2024-07-02 06:58:30.773 [INFO][4865] k8s.go 621: Teardown processing complete. ContainerID="353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc" Jul 2 06:58:30.775361 containerd[1393]: time="2024-07-02T06:58:30.774957803Z" level=info msg="TearDown network for sandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" successfully" Jul 2 06:58:30.860249 containerd[1393]: time="2024-07-02T06:58:30.860192671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 06:58:30.860438 containerd[1393]: time="2024-07-02T06:58:30.860280842Z" level=info msg="RemovePodSandbox \"353bfc3819432faf3064eda28fb8bb96f149a838d71b330150e211b0c6286ccc\" returns successfully" Jul 2 06:58:30.860903 containerd[1393]: time="2024-07-02T06:58:30.860869601Z" level=info msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.896 [WARNING][4903] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0", GenerateName:"calico-kube-controllers-d8f857889-", Namespace:"calico-system", SelfLink:"", UID:"9c276f6d-ef93-47c1-b679-37a0af5e9a64", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8f857889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a", Pod:"calico-kube-controllers-d8f857889-tt9xj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52cd0b37db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.896 [INFO][4903] k8s.go 608: Cleaning up netns ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.896 [INFO][4903] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" iface="eth0" netns="" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.896 [INFO][4903] k8s.go 615: Releasing IP address(es) ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.896 [INFO][4903] utils.go 188: Calico CNI releasing IP address ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.914 [INFO][4911] ipam_plugin.go 411: Releasing address using handleID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.914 [INFO][4911] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.914 [INFO][4911] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.919 [WARNING][4911] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.919 [INFO][4911] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.921 [INFO][4911] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:30.923877 containerd[1393]: 2024-07-02 06:58:30.922 [INFO][4903] k8s.go 621: Teardown processing complete. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.924403 containerd[1393]: time="2024-07-02T06:58:30.923909560Z" level=info msg="TearDown network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" successfully" Jul 2 06:58:30.924403 containerd[1393]: time="2024-07-02T06:58:30.923948947Z" level=info msg="StopPodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" returns successfully" Jul 2 06:58:30.924475 containerd[1393]: time="2024-07-02T06:58:30.924444125Z" level=info msg="RemovePodSandbox for \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" Jul 2 06:58:30.924514 containerd[1393]: time="2024-07-02T06:58:30.924475014Z" level=info msg="Forcibly stopping sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\"" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.952 [WARNING][4934] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0", GenerateName:"calico-kube-controllers-d8f857889-", Namespace:"calico-system", SelfLink:"", UID:"9c276f6d-ef93-47c1-b679-37a0af5e9a64", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d8f857889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a6bc01b32160910aca3218a1496e70cc2672baff1776c0f0e8067b753ff893a", Pod:"calico-kube-controllers-d8f857889-tt9xj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia52cd0b37db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.953 [INFO][4934] k8s.go 608: Cleaning up netns ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.953 [INFO][4934] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" iface="eth0" netns="" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.953 [INFO][4934] k8s.go 615: Releasing IP address(es) ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.953 [INFO][4934] utils.go 188: Calico CNI releasing IP address ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.969 [INFO][4941] ipam_plugin.go 411: Releasing address using handleID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.969 [INFO][4941] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.969 [INFO][4941] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.974 [WARNING][4941] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.974 [INFO][4941] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" HandleID="k8s-pod-network.3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Workload="localhost-k8s-calico--kube--controllers--d8f857889--tt9xj-eth0" Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.975 [INFO][4941] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:30.979215 containerd[1393]: 2024-07-02 06:58:30.977 [INFO][4934] k8s.go 621: Teardown processing complete. ContainerID="3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70" Jul 2 06:58:30.979709 containerd[1393]: time="2024-07-02T06:58:30.979258992Z" level=info msg="TearDown network for sandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" successfully" Jul 2 06:58:31.117342 containerd[1393]: time="2024-07-02T06:58:31.116570671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 06:58:31.117342 containerd[1393]: time="2024-07-02T06:58:31.116668490Z" level=info msg="RemovePodSandbox \"3084b9c68d4099d776ba2cb317573c07cac8e633c13c6335b65e6167150d4c70\" returns successfully" Jul 2 06:58:31.117342 containerd[1393]: time="2024-07-02T06:58:31.117206180Z" level=info msg="StopPodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.148 [WARNING][4963] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--22sns-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9c333969-8c44-45e3-a4bd-3452f33a72a4", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a", Pod:"coredns-5dd5756b68-22sns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8bf3f0a98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.149 [INFO][4963] k8s.go 608: Cleaning up netns ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.149 [INFO][4963] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" iface="eth0" netns="" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.149 [INFO][4963] k8s.go 615: Releasing IP address(es) ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.149 [INFO][4963] utils.go 188: Calico CNI releasing IP address ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.166 [INFO][4971] ipam_plugin.go 411: Releasing address using handleID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.166 [INFO][4971] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.166 [INFO][4971] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.171 [WARNING][4971] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.171 [INFO][4971] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.172 [INFO][4971] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:31.175355 containerd[1393]: 2024-07-02 06:58:31.174 [INFO][4963] k8s.go 621: Teardown processing complete. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.188927 containerd[1393]: time="2024-07-02T06:58:31.175407042Z" level=info msg="TearDown network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" successfully" Jul 2 06:58:31.188927 containerd[1393]: time="2024-07-02T06:58:31.175438052Z" level=info msg="StopPodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" returns successfully" Jul 2 06:58:31.188927 containerd[1393]: time="2024-07-02T06:58:31.175874045Z" level=info msg="RemovePodSandbox for \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" Jul 2 06:58:31.188927 containerd[1393]: time="2024-07-02T06:58:31.175904925Z" level=info msg="Forcibly stopping sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\"" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.211 [WARNING][4994] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--5dd5756b68--22sns-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"9c333969-8c44-45e3-a4bd-3452f33a72a4", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 57, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a40b755bd8fac2591d5186efebf5e04caa29fc1572b836a6d0d003f4f2dcb8a", Pod:"coredns-5dd5756b68-22sns", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f8bf3f0a98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.212 [INFO][4994] k8s.go 608: 
Cleaning up netns ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.212 [INFO][4994] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" iface="eth0" netns="" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.212 [INFO][4994] k8s.go 615: Releasing IP address(es) ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.212 [INFO][4994] utils.go 188: Calico CNI releasing IP address ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.240 [INFO][5002] ipam_plugin.go 411: Releasing address using handleID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.241 [INFO][5002] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.241 [INFO][5002] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.246 [WARNING][5002] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.246 [INFO][5002] ipam_plugin.go 439: Releasing address using workloadID ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" HandleID="k8s-pod-network.12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Workload="localhost-k8s-coredns--5dd5756b68--22sns-eth0" Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.248 [INFO][5002] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:31.251097 containerd[1393]: 2024-07-02 06:58:31.249 [INFO][4994] k8s.go 621: Teardown processing complete. ContainerID="12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f" Jul 2 06:58:31.251681 containerd[1393]: time="2024-07-02T06:58:31.251134984Z" level=info msg="TearDown network for sandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" successfully" Jul 2 06:58:31.421782 containerd[1393]: time="2024-07-02T06:58:31.421649339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:58:31.421782 containerd[1393]: time="2024-07-02T06:58:31.421733562Z" level=info msg="RemovePodSandbox \"12fc54f0365054c043cfc1b2d7acfcec05776fe1d350ed0feb84511477a6415f\" returns successfully" Jul 2 06:58:31.678723 systemd[1]: Started sshd@15-10.0.0.85:22-10.0.0.1:58968.service - OpenSSH per-connection server daemon (10.0.0.1:58968). 
Jul 2 06:58:31.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:58968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:31.733489 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:58:31.733584 kernel: audit: type=1130 audit(1719903511.677:370): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:58968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:31.769000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.771097 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 58968 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:31.771787 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:31.770000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.800475 systemd-logind[1375]: New session 16 of user core. 
Jul 2 06:58:31.804406 kernel: audit: type=1101 audit(1719903511.769:371): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.804583 kernel: audit: type=1103 audit(1719903511.770:372): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.804608 kernel: audit: type=1006 audit(1719903511.770:373): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 06:58:31.806619 kernel: audit: type=1300 audit(1719903511.770:373): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0b0d0b70 a2=3 a3=7f144fea6480 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:31.770000 audit[5010]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0b0d0b70 a2=3 a3=7f144fea6480 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:31.810315 kernel: audit: type=1327 audit(1719903511.770:373): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:31.770000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:31.823754 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jul 2 06:58:31.827000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.829000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.846031 kernel: audit: type=1105 audit(1719903511.827:374): pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.846158 kernel: audit: type=1103 audit(1719903511.829:375): pid=5013 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.940527 sshd[5010]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:31.940000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.942797 systemd[1]: sshd@15-10.0.0.85:22-10.0.0.1:58968.service: Deactivated successfully. Jul 2 06:58:31.943867 systemd-logind[1375]: Session 16 logged out. Waiting for processes to exit. Jul 2 06:58:31.943937 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 06:58:31.944963 systemd-logind[1375]: Removed session 16. 
Jul 2 06:58:31.940000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.959749 kernel: audit: type=1106 audit(1719903511.940:376): pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.959868 kernel: audit: type=1104 audit(1719903511.940:377): pid=5010 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:31.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.85:22-10.0.0.1:58968 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:36.952749 systemd[1]: Started sshd@16-10.0.0.85:22-10.0.0.1:51934.service - OpenSSH per-connection server daemon (10.0.0.1:51934). Jul 2 06:58:36.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:51934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:36.953977 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:58:36.954019 kernel: audit: type=1130 audit(1719903516.951:379): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:51934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:36.975000 audit[5045]: USER_ACCT pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:36.977241 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 51934 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:36.978295 sshd[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:36.981503 systemd-logind[1375]: New session 17 of user core. Jul 2 06:58:36.976000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.004553 kernel: audit: type=1101 audit(1719903516.975:380): pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.004607 kernel: audit: type=1103 audit(1719903516.976:381): pid=5045 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.004624 kernel: audit: type=1006 audit(1719903516.976:382): pid=5045 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 2 06:58:36.976000 audit[5045]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffad6bffa0 a2=3 a3=7f7d42687480 items=0 ppid=1 pid=5045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:37.010382 kernel: audit: type=1300 audit(1719903516.976:382): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffad6bffa0 a2=3 a3=7f7d42687480 items=0 ppid=1 pid=5045 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:37.010438 kernel: audit: type=1327 audit(1719903516.976:382): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:36.976000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:37.014807 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 06:58:37.018000 audit[5045]: USER_START pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.019000 audit[5048]: CRED_ACQ pid=5048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.026414 kernel: audit: type=1105 audit(1719903517.018:383): pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.026442 kernel: audit: type=1103 audit(1719903517.019:384): pid=5048 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.120896 sshd[5045]: pam_unix(sshd:session): session closed for user 
core Jul 2 06:58:37.120000 audit[5045]: USER_END pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.123565 systemd[1]: sshd@16-10.0.0.85:22-10.0.0.1:51934.service: Deactivated successfully. Jul 2 06:58:37.124747 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 06:58:37.124935 systemd-logind[1375]: Session 17 logged out. Waiting for processes to exit. Jul 2 06:58:37.125910 systemd-logind[1375]: Removed session 17. Jul 2 06:58:37.120000 audit[5045]: CRED_DISP pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.129423 kernel: audit: type=1106 audit(1719903517.120:385): pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.129563 kernel: audit: type=1104 audit(1719903517.120:386): pid=5045 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:37.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.85:22-10.0.0.1:51934 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.132668 systemd[1]: Started sshd@17-10.0.0.85:22-10.0.0.1:51946.service - OpenSSH per-connection server daemon (10.0.0.1:51946). 
Jul 2 06:58:42.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.133638 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:58:42.133692 kernel: audit: type=1130 audit(1719903522.132:388): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.161000 audit[5066]: USER_ACCT pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.162305 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:42.163538 sshd[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:42.167039 systemd-logind[1375]: New session 18 of user core. 
Jul 2 06:58:42.162000 audit[5066]: CRED_ACQ pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.188811 kernel: audit: type=1101 audit(1719903522.161:389): pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.188972 kernel: audit: type=1103 audit(1719903522.162:390): pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.188993 kernel: audit: type=1006 audit(1719903522.162:391): pid=5066 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 2 06:58:42.162000 audit[5066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff81b9ac70 a2=3 a3=7f66b410b480 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:42.193673 kernel: audit: type=1300 audit(1719903522.162:391): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff81b9ac70 a2=3 a3=7f66b410b480 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:42.193724 kernel: audit: type=1327 audit(1719903522.162:391): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:42.162000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:42.196811 systemd[1]: 
Started session-18.scope - Session 18 of User core. Jul 2 06:58:42.201000 audit[5066]: USER_START pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.202000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.212973 kernel: audit: type=1105 audit(1719903522.201:392): pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.213051 kernel: audit: type=1103 audit(1719903522.202:393): pid=5069 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.304617 sshd[5066]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:42.306000 audit[5066]: USER_END pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.306000 audit[5066]: CRED_DISP pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.313239 kernel: 
audit: type=1106 audit(1719903522.306:394): pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.313293 kernel: audit: type=1104 audit(1719903522.306:395): pid=5066 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.313745 systemd[1]: Started sshd@18-10.0.0.85:22-10.0.0.1:51958.service - OpenSSH per-connection server daemon (10.0.0.1:51958). Jul 2 06:58:42.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:51958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.314326 systemd[1]: sshd@17-10.0.0.85:22-10.0.0.1:51946.service: Deactivated successfully. Jul 2 06:58:42.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.85:22-10.0.0.1:51946 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.316316 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 06:58:42.316545 systemd-logind[1375]: Session 18 logged out. Waiting for processes to exit. Jul 2 06:58:42.317692 systemd-logind[1375]: Removed session 18. 
Jul 2 06:58:42.340000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.341322 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 51958 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:42.341000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.341000 audit[5078]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceb993ad0 a2=3 a3=7fbf186fb480 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:42.341000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:42.342699 sshd[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:42.348767 systemd-logind[1375]: New session 19 of user core. Jul 2 06:58:42.353693 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 2 06:58:42.357000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.359000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.762441 sshd[5078]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:42.763000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.764000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.771858 systemd[1]: Started sshd@19-10.0.0.85:22-10.0.0.1:57632.service - OpenSSH per-connection server daemon (10.0.0.1:57632). Jul 2 06:58:42.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:57632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:42.772634 systemd[1]: sshd@18-10.0.0.85:22-10.0.0.1:51958.service: Deactivated successfully. Jul 2 06:58:42.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.85:22-10.0.0.1:51958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 06:58:42.774035 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 06:58:42.774129 systemd-logind[1375]: Session 19 logged out. Waiting for processes to exit. Jul 2 06:58:42.775487 systemd-logind[1375]: Removed session 19. Jul 2 06:58:42.803000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.804198 sshd[5090]: Accepted publickey for core from 10.0.0.1 port 57632 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:42.804000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.804000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc441a7c20 a2=3 a3=7f908df6d480 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:42.804000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:42.805685 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:42.810387 systemd-logind[1375]: New session 20 of user core. Jul 2 06:58:42.821712 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 2 06:58:42.826000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:42.828000 audit[5095]: CRED_ACQ pid=5095 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:43.960000 audit[5106]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:43.960000 audit[5106]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff7fd5c140 a2=0 a3=7fff7fd5c12c items=0 ppid=2608 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:43.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:43.961000 audit[5106]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:43.961000 audit[5106]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff7fd5c140 a2=0 a3=0 items=0 ppid=2608 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:43.961000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:43.971181 sshd[5090]: 
pam_unix(sshd:session): session closed for user core Jul 2 06:58:43.974000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:43.974000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:43.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:57634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:43.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.85:22-10.0.0.1:57632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:43.977739 systemd[1]: Started sshd@20-10.0.0.85:22-10.0.0.1:57634.service - OpenSSH per-connection server daemon (10.0.0.1:57634). Jul 2 06:58:43.978172 systemd[1]: sshd@19-10.0.0.85:22-10.0.0.1:57632.service: Deactivated successfully. Jul 2 06:58:43.979996 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 06:58:43.980313 systemd-logind[1375]: Session 20 logged out. Waiting for processes to exit. 
Jul 2 06:58:43.980000 audit[5109]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:43.980000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffb6828480 a2=0 a3=7fffb682846c items=0 ppid=2608 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:43.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:43.981490 systemd-logind[1375]: Removed session 20. Jul 2 06:58:43.981000 audit[5109]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5109 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:43.981000 audit[5109]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffb6828480 a2=0 a3=0 items=0 ppid=2608 pid=5109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:43.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:44.015000 audit[5108]: USER_ACCT pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.015972 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 57634 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:44.016000 audit[5108]: CRED_ACQ pid=5108 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.016000 audit[5108]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5e281ef0 a2=3 a3=7fa2e64dd480 items=0 ppid=1 pid=5108 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:44.016000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:44.017584 sshd[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:44.023765 systemd-logind[1375]: New session 21 of user core. Jul 2 06:58:44.028845 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 06:58:44.035000 audit[5108]: USER_START pid=5108 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.036000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.474048 sshd[5108]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:44.477000 audit[5108]: USER_END pid=5108 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.477000 audit[5108]: CRED_DISP pid=5108 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:57646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:44.479972 systemd[1]: Started sshd@21-10.0.0.85:22-10.0.0.1:57646.service - OpenSSH per-connection server daemon (10.0.0.1:57646). Jul 2 06:58:44.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.85:22-10.0.0.1:57634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:44.480867 systemd[1]: sshd@20-10.0.0.85:22-10.0.0.1:57634.service: Deactivated successfully. Jul 2 06:58:44.482204 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 06:58:44.482326 systemd-logind[1375]: Session 21 logged out. Waiting for processes to exit. Jul 2 06:58:44.484079 systemd-logind[1375]: Removed session 21. 
Jul 2 06:58:44.516000 audit[5122]: USER_ACCT pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.517038 sshd[5122]: Accepted publickey for core from 10.0.0.1 port 57646 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:44.517000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.517000 audit[5122]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe83a64330 a2=3 a3=7f1d13a69480 items=0 ppid=1 pid=5122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:44.517000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:44.518514 sshd[5122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:44.522828 systemd-logind[1375]: New session 22 of user core. Jul 2 06:58:44.527830 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 2 06:58:44.533000 audit[5122]: USER_START pid=5122 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.535000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.668883 sshd[5122]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:44.669000 audit[5122]: USER_END pid=5122 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.669000 audit[5122]: CRED_DISP pid=5122 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:44.672039 systemd[1]: sshd@21-10.0.0.85:22-10.0.0.1:57646.service: Deactivated successfully. Jul 2 06:58:44.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.85:22-10.0.0.1:57646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:44.673122 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 06:58:44.673602 systemd-logind[1375]: Session 22 logged out. Waiting for processes to exit. Jul 2 06:58:44.674399 systemd-logind[1375]: Removed session 22. 
Jul 2 06:58:46.549282 kubelet[2418]: E0702 06:58:46.549238 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:49.680826 systemd[1]: Started sshd@22-10.0.0.85:22-10.0.0.1:57650.service - OpenSSH per-connection server daemon (10.0.0.1:57650). Jul 2 06:58:49.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:57650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:49.685551 kernel: kauditd_printk_skb: 57 callbacks suppressed Jul 2 06:58:49.685613 kernel: audit: type=1130 audit(1719903529.680:437): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:57650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:49.707000 audit[5141]: USER_ACCT pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.708128 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 57650 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:49.709111 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:49.708000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.713691 systemd-logind[1375]: New session 23 of user core. 
Jul 2 06:58:49.725855 kernel: audit: type=1101 audit(1719903529.707:438): pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.725907 kernel: audit: type=1103 audit(1719903529.708:439): pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.725930 kernel: audit: type=1006 audit(1719903529.708:440): pid=5141 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 2 06:58:49.725968 kernel: audit: type=1300 audit(1719903529.708:440): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef85594e0 a2=3 a3=7ffa6c220480 items=0 ppid=1 pid=5141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:49.725988 kernel: audit: type=1327 audit(1719903529.708:440): proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:49.708000 audit[5141]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef85594e0 a2=3 a3=7ffa6c220480 items=0 ppid=1 pid=5141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:49.708000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:49.725875 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 2 06:58:49.730000 audit[5141]: USER_START pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.732000 audit[5144]: CRED_ACQ pid=5144 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.737081 kernel: audit: type=1105 audit(1719903529.730:441): pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.737120 kernel: audit: type=1103 audit(1719903529.732:442): pid=5144 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.859916 sshd[5141]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:49.860000 audit[5141]: USER_END pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.862901 systemd[1]: sshd@22-10.0.0.85:22-10.0.0.1:57650.service: Deactivated successfully. Jul 2 06:58:49.864227 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 06:58:49.864980 systemd-logind[1375]: Session 23 logged out. Waiting for processes to exit. 
Jul 2 06:58:49.861000 audit[5141]: CRED_DISP pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.866703 systemd-logind[1375]: Removed session 23. Jul 2 06:58:49.873116 kernel: audit: type=1106 audit(1719903529.860:443): pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.873204 kernel: audit: type=1104 audit(1719903529.861:444): pid=5141 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:49.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.85:22-10.0.0.1:57650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:51.384162 systemd[1]: run-containerd-runc-k8s.io-b0e9c0b6e26c0b74cc8eb12c2b61acc4385a393258fb98eccefb5d5c7ec407a0-runc.jVu7hK.mount: Deactivated successfully. 
Jul 2 06:58:51.485225 kubelet[2418]: E0702 06:58:51.484844 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:54.814000 audit[5207]: NETFILTER_CFG table=filter:117 family=2 entries=20 op=nft_register_rule pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:54.817277 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:58:54.817332 kernel: audit: type=1325 audit(1719903534.814:446): table=filter:117 family=2 entries=20 op=nft_register_rule pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:54.814000 audit[5207]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc8b783010 a2=0 a3=7ffc8b782ffc items=0 ppid=2608 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:54.825465 kernel: audit: type=1300 audit(1719903534.814:446): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc8b783010 a2=0 a3=7ffc8b782ffc items=0 ppid=2608 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:54.825573 kernel: audit: type=1327 audit(1719903534.814:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:54.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:54.816000 audit[5207]: NETFILTER_CFG table=nat:118 family=2 entries=104 op=nft_register_chain pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:54.816000 audit[5207]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc8b783010 a2=0 a3=7ffc8b782ffc items=0 ppid=2608 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:54.838994 kernel: audit: type=1325 audit(1719903534.816:447): table=nat:118 family=2 entries=104 op=nft_register_chain pid=5207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:54.839123 kernel: audit: type=1300 audit(1719903534.816:447): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc8b783010 a2=0 a3=7ffc8b782ffc items=0 ppid=2608 pid=5207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:54.839170 kernel: audit: type=1327 audit(1719903534.816:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:54.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:54.868857 systemd[1]: Started sshd@23-10.0.0.85:22-10.0.0.1:44104.service - OpenSSH per-connection server daemon (10.0.0.1:44104). Jul 2 06:58:54.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:44104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:58:54.873406 kernel: audit: type=1130 audit(1719903534.868:448): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:44104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:54.894000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:54.897571 sshd[5210]: Accepted publickey for core from 10.0.0.1 port 44104 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:58:54.899083 sshd[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:58:54.897000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:54.902179 kernel: audit: type=1101 audit(1719903534.894:449): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:54.902300 kernel: audit: type=1103 audit(1719903534.897:450): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:54.902338 kernel: audit: type=1006 audit(1719903534.897:451): pid=5210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 2 06:58:54.897000 audit[5210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd5b2a450 a2=3 a3=7fce19ea0480 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:54.897000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:58:54.906618 systemd-logind[1375]: New session 24 of user core. Jul 2 06:58:54.914787 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 06:58:54.919000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:54.921000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:55.030016 sshd[5210]: pam_unix(sshd:session): session closed for user core Jul 2 06:58:55.030000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:55.030000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:58:55.033709 systemd-logind[1375]: Session 24 logged out. Waiting for processes to exit. Jul 2 06:58:55.034575 systemd[1]: sshd@23-10.0.0.85:22-10.0.0.1:44104.service: Deactivated successfully. Jul 2 06:58:55.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.85:22-10.0.0.1:44104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:58:55.035684 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 06:58:55.036747 systemd-logind[1375]: Removed session 24. Jul 2 06:58:55.074000 audit[5226]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=5226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:55.074000 audit[5226]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe0261af80 a2=0 a3=7ffe0261af6c items=0 ppid=2608 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:55.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:55.085062 kubelet[2418]: I0702 06:58:55.085026 2418 topology_manager.go:215] "Topology Admit Handler" podUID="0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4" podNamespace="calico-apiserver" podName="calico-apiserver-957ddf866-f67qb" Jul 2 06:58:55.084000 audit[5226]: NETFILTER_CFG table=nat:120 family=2 entries=44 op=nft_register_rule pid=5226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:55.084000 audit[5226]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe0261af80 a2=0 a3=7ffe0261af6c items=0 ppid=2608 pid=5226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:55.084000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:55.091080 kubelet[2418]: W0702 06:58:55.091042 2418 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" 
cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jul 2 06:58:55.091080 kubelet[2418]: E0702 06:58:55.091090 2418 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'localhost' and this object Jul 2 06:58:55.262844 kubelet[2418]: I0702 06:58:55.262707 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mzf\" (UniqueName: \"kubernetes.io/projected/0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4-kube-api-access-h2mzf\") pod \"calico-apiserver-957ddf866-f67qb\" (UID: \"0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4\") " pod="calico-apiserver/calico-apiserver-957ddf866-f67qb" Jul 2 06:58:55.262844 kubelet[2418]: I0702 06:58:55.262780 2418 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4-calico-apiserver-certs\") pod \"calico-apiserver-957ddf866-f67qb\" (UID: \"0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4\") " pod="calico-apiserver/calico-apiserver-957ddf866-f67qb" Jul 2 06:58:56.093000 audit[5229]: NETFILTER_CFG table=filter:121 family=2 entries=10 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:56.093000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdbf54c040 a2=0 a3=7ffdbf54c02c items=0 ppid=2608 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:56.093000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:56.094000 audit[5229]: NETFILTER_CFG table=nat:122 family=2 entries=44 op=nft_register_rule pid=5229 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:58:56.094000 audit[5229]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffdbf54c040 a2=0 a3=7ffdbf54c02c items=0 ppid=2608 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:56.094000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:58:56.364772 kubelet[2418]: E0702 06:58:56.364733 2418 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 2 06:58:56.373061 kubelet[2418]: E0702 06:58:56.373025 2418 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4-calico-apiserver-certs podName:0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4 nodeName:}" failed. No retries permitted until 2024-07-02 06:58:56.864808049 +0000 UTC m=+87.403026106 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4-calico-apiserver-certs") pod "calico-apiserver-957ddf866-f67qb" (UID: "0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4") : failed to sync secret cache: timed out waiting for the condition Jul 2 06:58:56.888687 containerd[1393]: time="2024-07-02T06:58:56.888634214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-957ddf866-f67qb,Uid:0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4,Namespace:calico-apiserver,Attempt:0,}" Jul 2 06:58:57.209691 systemd-networkd[1177]: califcf789eb526: Link UP Jul 2 06:58:57.211679 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:58:57.211726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califcf789eb526: link becomes ready Jul 2 06:58:57.211612 systemd-networkd[1177]: califcf789eb526: Gained carrier Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.046 [INFO][5231] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0 calico-apiserver-957ddf866- calico-apiserver 0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4 1143 0 2024-07-02 06:58:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:957ddf866 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-957ddf866-f67qb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califcf789eb526 [] []}} ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.046 [INFO][5231] k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.108 [INFO][5245] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" HandleID="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Workload="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.176 [INFO][5245] ipam_plugin.go 264: Auto assigning IP ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" HandleID="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Workload="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003823a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-957ddf866-f67qb", "timestamp":"2024-07-02 06:58:57.108175606 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.177 [INFO][5245] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.177 [INFO][5245] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.177 [INFO][5245] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.179 [INFO][5245] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.183 [INFO][5245] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.187 [INFO][5245] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.189 [INFO][5245] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.191 [INFO][5245] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.191 [INFO][5245] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.192 [INFO][5245] ipam.go 1685: Creating new handle: k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73 Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.195 [INFO][5245] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.206 [INFO][5245] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" host="localhost" Jul 2 
06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.206 [INFO][5245] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" host="localhost" Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.206 [INFO][5245] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:58:57.234607 containerd[1393]: 2024-07-02 06:58:57.206 [INFO][5245] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" HandleID="k8s-pod-network.d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Workload="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.208 [INFO][5231] k8s.go 386: Populated endpoint ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0", GenerateName:"calico-apiserver-957ddf866-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"957ddf866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-957ddf866-f67qb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcf789eb526", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.208 [INFO][5231] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.208 [INFO][5231] dataplane_linux.go 68: Setting the host side veth name to califcf789eb526 ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.211 [INFO][5231] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.212 [INFO][5231] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0", GenerateName:"calico-apiserver-957ddf866-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 58, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"957ddf866", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73", Pod:"calico-apiserver-957ddf866-f67qb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califcf789eb526", MAC:"a6:bf:5a:6c:2a:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:58:57.235265 containerd[1393]: 2024-07-02 06:58:57.232 [INFO][5231] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73" Namespace="calico-apiserver" Pod="calico-apiserver-957ddf866-f67qb" WorkloadEndpoint="localhost-k8s-calico--apiserver--957ddf866--f67qb-eth0" Jul 2 06:58:57.244000 audit[5268]: NETFILTER_CFG 
table=filter:123 family=2 entries=55 op=nft_register_chain pid=5268 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:58:57.244000 audit[5268]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff1b59a600 a2=0 a3=7fff1b59a5ec items=0 ppid=3726 pid=5268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:58:57.244000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:58:57.276055 containerd[1393]: time="2024-07-02T06:58:57.275946850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:58:57.276267 containerd[1393]: time="2024-07-02T06:58:57.276028346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:57.276267 containerd[1393]: time="2024-07-02T06:58:57.276049185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:58:57.276267 containerd[1393]: time="2024-07-02T06:58:57.276062261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:58:57.301628 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:58:57.323702 containerd[1393]: time="2024-07-02T06:58:57.323647828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-957ddf866-f67qb,Uid:0d7ab2b8-17ab-42b2-82c3-519c76ddf9e4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73\"" Jul 2 06:58:57.325092 containerd[1393]: time="2024-07-02T06:58:57.325052573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 06:58:57.549462 kubelet[2418]: E0702 06:58:57.549338 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:58:58.563612 systemd-networkd[1177]: califcf789eb526: Gained IPv6LL Jul 2 06:59:00.044666 systemd[1]: Started sshd@24-10.0.0.85:22-10.0.0.1:44114.service - OpenSSH per-connection server daemon (10.0.0.1:44114). Jul 2 06:59:00.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:59:00.046906 kernel: kauditd_printk_skb: 22 callbacks suppressed Jul 2 06:59:00.046969 kernel: audit: type=1130 audit(1719903540.043:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:59:00.205000 audit[5314]: USER_ACCT pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:59:00.206781 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 44114 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 06:59:00.207761 sshd[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:59:00.206000 audit[5314]: CRED_ACQ pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:59:00.212879 kernel: audit: type=1101 audit(1719903540.205:463): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:59:00.212937 kernel: audit: type=1103 audit(1719903540.206:464): pid=5314 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:59:00.212976 systemd-logind[1375]: New session 25 of user core. 
Jul 2 06:59:00.226438 kernel: audit: type=1006 audit(1719903540.206:465): pid=5314 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 2 06:59:00.226468 kernel: audit: type=1300 audit(1719903540.206:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3dd80ca0 a2=3 a3=7f76b5827480 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:59:00.226496 kernel: audit: type=1327 audit(1719903540.206:465): proctitle=737368643A20636F7265205B707269765D Jul 2 06:59:00.206000 audit[5314]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3dd80ca0 a2=3 a3=7f76b5827480 items=0 ppid=1 pid=5314 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:59:00.206000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:59:00.224603 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 06:59:00.226000 audit[5314]: USER_START pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.228000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.234595 kernel: audit: type=1105 audit(1719903540.226:466): pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.234631 kernel: audit: type=1103 audit(1719903540.228:467): pid=5317 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.343839 sshd[5314]: pam_unix(sshd:session): session closed for user core
Jul 2 06:59:00.343000 audit[5314]: USER_END pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.346439 systemd[1]: sshd@24-10.0.0.85:22-10.0.0.1:44114.service: Deactivated successfully.
Jul 2 06:59:00.347341 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 06:59:00.347497 systemd-logind[1375]: Session 25 logged out. Waiting for processes to exit.
Jul 2 06:59:00.348233 systemd-logind[1375]: Removed session 25.
Jul 2 06:59:00.343000 audit[5314]: CRED_DISP pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.376397 kernel: audit: type=1106 audit(1719903540.343:468): pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.376454 kernel: audit: type=1104 audit(1719903540.343:469): pid=5314 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:00.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.85:22-10.0.0.1:44114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:01.113771 containerd[1393]: time="2024-07-02T06:59:01.113714105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:59:01.216382 containerd[1393]: time="2024-07-02T06:59:01.216295232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Jul 2 06:59:01.266413 containerd[1393]: time="2024-07-02T06:59:01.266332998Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:59:01.289000 audit[5334]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:01.289000 audit[5334]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe6ccaa8b0 a2=0 a3=7ffe6ccaa89c items=0 ppid=2608 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:01.289000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:01.291000 audit[5334]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5334 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:01.291000 audit[5334]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffe6ccaa8b0 a2=0 a3=7ffe6ccaa89c items=0 ppid=2608 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:01.291000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:01.312808 containerd[1393]: time="2024-07-02T06:59:01.312745258Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:59:01.406120 containerd[1393]: time="2024-07-02T06:59:01.405976362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 06:59:01.406810 containerd[1393]: time="2024-07-02T06:59:01.406767947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.081669417s"
Jul 2 06:59:01.406869 containerd[1393]: time="2024-07-02T06:59:01.406819395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Jul 2 06:59:01.408311 containerd[1393]: time="2024-07-02T06:59:01.408284261Z" level=info msg="CreateContainer within sandbox \"d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 2 06:59:02.312927 containerd[1393]: time="2024-07-02T06:59:02.312869691Z" level=info msg="CreateContainer within sandbox \"d4b01c253679fec752a9d95d5c65bef346b47d3b483a0a60d3e789533f236d73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c257a348f71bf033a43caa405b7db9744b107a89519104ac72e8a9947f85267e\""
Jul 2 06:59:02.313392 containerd[1393]: time="2024-07-02T06:59:02.313311281Z" level=info msg="StartContainer for \"c257a348f71bf033a43caa405b7db9744b107a89519104ac72e8a9947f85267e\""
Jul 2 06:59:03.016072 containerd[1393]: time="2024-07-02T06:59:03.016003668Z" level=info msg="StartContainer for \"c257a348f71bf033a43caa405b7db9744b107a89519104ac72e8a9947f85267e\" returns successfully"
Jul 2 06:59:03.089768 kubelet[2418]: I0702 06:59:03.089737 2418 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-957ddf866-f67qb" podStartSLOduration=4.007396978 podCreationTimestamp="2024-07-02 06:58:55 +0000 UTC" firstStartedPulling="2024-07-02 06:58:57.324768102 +0000 UTC m=+87.862986159" lastFinishedPulling="2024-07-02 06:59:01.407070132 +0000 UTC m=+91.945288189" observedRunningTime="2024-07-02 06:59:03.089313667 +0000 UTC m=+93.627531714" watchObservedRunningTime="2024-07-02 06:59:03.089699008 +0000 UTC m=+93.627917065"
Jul 2 06:59:03.163000 audit[5398]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:03.163000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc1d78d3f0 a2=0 a3=7ffc1d78d3dc items=0 ppid=2608 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:03.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:03.165000 audit[5398]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5398 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:03.165000 audit[5398]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffc1d78d3f0 a2=0 a3=7ffc1d78d3dc items=0 ppid=2608 pid=5398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:03.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:03.307000 audit[5404]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:03.307000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffede78d5b0 a2=0 a3=7ffede78d59c items=0 ppid=2608 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:03.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:03.308000 audit[5404]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=5404 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 06:59:03.308000 audit[5404]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffede78d5b0 a2=0 a3=7ffede78d59c items=0 ppid=2608 pid=5404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:03.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 06:59:03.549219 kubelet[2418]: E0702 06:59:03.549170 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:59:05.354869 systemd[1]: Started sshd@25-10.0.0.85:22-10.0.0.1:34232.service - OpenSSH per-connection server daemon (10.0.0.1:34232).
Jul 2 06:59:05.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:34232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:05.356212 kernel: kauditd_printk_skb: 19 callbacks suppressed
Jul 2 06:59:05.356272 kernel: audit: type=1130 audit(1719903545.353:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:34232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:05.385000 audit[5405]: USER_ACCT pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.387153 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 34232 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ
Jul 2 06:59:05.402398 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:59:05.386000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.406974 systemd-logind[1375]: New session 26 of user core.
Jul 2 06:59:05.408303 kernel: audit: type=1101 audit(1719903545.385:478): pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.408358 kernel: audit: type=1103 audit(1719903545.386:479): pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.408413 kernel: audit: type=1006 audit(1719903545.386:480): pid=5405 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jul 2 06:59:05.410170 kernel: audit: type=1300 audit(1719903545.386:480): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee5ef7870 a2=3 a3=7fa88e31c480 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:05.386000 audit[5405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee5ef7870 a2=3 a3=7fa88e31c480 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:05.386000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 06:59:05.414582 kernel: audit: type=1327 audit(1719903545.386:480): proctitle=737368643A20636F7265205B707269765D
Jul 2 06:59:05.416642 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 06:59:05.420000 audit[5405]: USER_START pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.421000 audit[5408]: CRED_ACQ pid=5408 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.428120 kernel: audit: type=1105 audit(1719903545.420:481): pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.428172 kernel: audit: type=1103 audit(1719903545.421:482): pid=5408 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.536561 sshd[5405]: pam_unix(sshd:session): session closed for user core
Jul 2 06:59:05.536000 audit[5405]: USER_END pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.540067 systemd[1]: sshd@25-10.0.0.85:22-10.0.0.1:34232.service: Deactivated successfully.
Jul 2 06:59:05.541111 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 06:59:05.536000 audit[5405]: CRED_DISP pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.543054 systemd-logind[1375]: Session 26 logged out. Waiting for processes to exit.
Jul 2 06:59:05.544192 systemd-logind[1375]: Removed session 26.
Jul 2 06:59:05.551084 kernel: audit: type=1106 audit(1719903545.536:483): pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.551226 kernel: audit: type=1104 audit(1719903545.536:484): pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:05.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.85:22-10.0.0.1:34232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:07.549821 kubelet[2418]: E0702 06:59:07.549780 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:59:07.550270 kubelet[2418]: E0702 06:59:07.550249 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 2 06:59:10.545810 systemd[1]: Started sshd@26-10.0.0.85:22-10.0.0.1:34248.service - OpenSSH per-connection server daemon (10.0.0.1:34248).
Jul 2 06:59:10.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:34248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:10.546812 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 2 06:59:10.546868 kernel: audit: type=1130 audit(1719903550.544:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:34248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 06:59:10.571000 audit[5421]: USER_ACCT pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.572677 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 34248 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ
Jul 2 06:59:10.573882 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 06:59:10.572000 audit[5421]: CRED_ACQ pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.577410 systemd-logind[1375]: New session 27 of user core.
Jul 2 06:59:10.579467 kernel: audit: type=1101 audit(1719903550.571:487): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.579513 kernel: audit: type=1103 audit(1719903550.572:488): pid=5421 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.579536 kernel: audit: type=1006 audit(1719903550.572:489): pid=5421 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jul 2 06:59:10.572000 audit[5421]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc55cdb800 a2=3 a3=7ff735e11480 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:10.584968 kernel: audit: type=1300 audit(1719903550.572:489): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc55cdb800 a2=3 a3=7ff735e11480 items=0 ppid=1 pid=5421 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 06:59:10.585009 kernel: audit: type=1327 audit(1719903550.572:489): proctitle=737368643A20636F7265205B707269765D
Jul 2 06:59:10.572000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 06:59:10.597764 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 06:59:10.601000 audit[5421]: USER_START pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.602000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.609363 kernel: audit: type=1105 audit(1719903550.601:490): pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.609446 kernel: audit: type=1103 audit(1719903550.602:491): pid=5424 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.708977 sshd[5421]: pam_unix(sshd:session): session closed for user core
Jul 2 06:59:10.708000 audit[5421]: USER_END pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.712062 systemd-logind[1375]: Session 27 logged out. Waiting for processes to exit.
Jul 2 06:59:10.712312 systemd[1]: sshd@26-10.0.0.85:22-10.0.0.1:34248.service: Deactivated successfully.
Jul 2 06:59:10.713240 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 06:59:10.713778 systemd-logind[1375]: Removed session 27.
Jul 2 06:59:10.708000 audit[5421]: CRED_DISP pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.716910 kernel: audit: type=1106 audit(1719903550.708:492): pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.716991 kernel: audit: type=1104 audit(1719903550.708:493): pid=5421 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 06:59:10.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.85:22-10.0.0.1:34248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'