Jul 2 07:00:17.794613 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 07:00:17.794630 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:00:17.794639 kernel: BIOS-provided physical RAM map: Jul 2 07:00:17.794644 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:00:17.794649 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 2 07:00:17.794654 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 2 07:00:17.794660 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 2 07:00:17.794665 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 2 07:00:17.794669 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 2 07:00:17.794674 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 2 07:00:17.794680 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 2 07:00:17.794685 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 2 07:00:17.794690 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 2 07:00:17.794695 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 2 07:00:17.794701 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 2 07:00:17.794708 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 2 07:00:17.794713 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 2 07:00:17.794718 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 2 07:00:17.794723 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 2 07:00:17.794728 kernel: NX (Execute Disable) protection: active Jul 2 07:00:17.794733 kernel: efi: EFI v2.70 by EDK II Jul 2 07:00:17.794738 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b773018 Jul 2 07:00:17.794744 kernel: SMBIOS 2.8 present. 
Jul 2 07:00:17.794749 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 0.0.0 02/06/2015 Jul 2 07:00:17.794754 kernel: Hypervisor detected: KVM Jul 2 07:00:17.794759 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 07:00:17.794764 kernel: kvm-clock: using sched offset of 4241257846 cycles Jul 2 07:00:17.794771 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 07:00:17.794777 kernel: tsc: Detected 2794.748 MHz processor Jul 2 07:00:17.794783 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:00:17.794788 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:00:17.794794 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 2 07:00:17.794799 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:00:17.794804 kernel: Using GB pages for direct mapping Jul 2 07:00:17.794810 kernel: Secure boot disabled Jul 2 07:00:17.794816 kernel: ACPI: Early table checksum verification disabled Jul 2 07:00:17.794821 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 2 07:00:17.794827 kernel: ACPI: XSDT 0x000000009CB7D0E8 00004C (v01 BOCHS BXPC 00000001 01000013) Jul 2 07:00:17.794832 kernel: ACPI: FACP 0x000000009CB7A000 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:00:17.794838 kernel: ACPI: DSDT 0x000000009CB7B000 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:00:17.794845 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 2 07:00:17.794851 kernel: ACPI: APIC 0x000000009CB79000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:00:17.794858 kernel: ACPI: HPET 0x000000009CB78000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:00:17.794864 kernel: ACPI: WAET 0x000000009CB77000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 07:00:17.794870 kernel: ACPI: BGRT 0x000000009CB76000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 2 07:00:17.794876 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb7a000-0x9cb7a073] Jul 2 07:00:17.794881 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7b000-0x9cb7ca38] Jul 2 07:00:17.794887 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 2 07:00:17.794893 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb79000-0x9cb7908f] Jul 2 07:00:17.794900 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb78000-0x9cb78037] Jul 2 07:00:17.794905 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb77000-0x9cb77027] Jul 2 07:00:17.794911 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb76000-0x9cb76037] Jul 2 07:00:17.794917 kernel: No NUMA configuration found Jul 2 07:00:17.794922 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 2 07:00:17.794928 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 2 07:00:17.794934 kernel: Zone ranges: Jul 2 07:00:17.794940 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:00:17.794945 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 2 07:00:17.794951 kernel: Normal empty Jul 2 07:00:17.794958 kernel: Movable zone start for each node Jul 2 07:00:17.794964 kernel: Early memory node ranges Jul 2 07:00:17.794969 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:00:17.794975 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 2 07:00:17.794981 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 2 07:00:17.794987 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 2 07:00:17.794992 kernel: node 0: [mem 
0x0000000000900000-0x000000009c8eefff] Jul 2 07:00:17.794998 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 2 07:00:17.795003 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 2 07:00:17.795011 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:00:17.795018 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:00:17.795024 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 2 07:00:17.795030 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:00:17.795036 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 2 07:00:17.795043 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 2 07:00:17.795051 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 2 07:00:17.795058 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 07:00:17.795065 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 07:00:17.795072 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:00:17.795081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 07:00:17.795088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 07:00:17.795095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 07:00:17.795102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 07:00:17.795109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 07:00:17.795117 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:00:17.795124 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 07:00:17.795131 kernel: TSC deadline timer available Jul 2 07:00:17.795138 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 07:00:17.795147 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 07:00:17.795154 kernel: kvm-guest: setup PV sched yield Jul 2 07:00:17.795161 kernel: [mem 0x9d000000-0xffffffff] available for PCI devices Jul 2 07:00:17.795168 kernel: Booting paravirtualized kernel on KVM Jul 2 07:00:17.795175 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:00:17.795183 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 2 07:00:17.795190 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jul 2 07:00:17.795198 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jul 2 07:00:17.795216 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 07:00:17.795226 kernel: kvm-guest: PV spinlocks enabled Jul 2 07:00:17.795234 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:00:17.795242 kernel: Fallback order for Node 0: 0 Jul 2 07:00:17.795250 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 2 07:00:17.795258 kernel: Policy zone: DMA32 Jul 2 07:00:17.795267 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:00:17.795276 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 2 07:00:17.795284 kernel: random: crng init done Jul 2 07:00:17.795294 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:00:17.795302 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 07:00:17.795310 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:00:17.795319 kernel: Memory: 2392504K/2567000K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 174236K reserved, 0K cma-reserved) Jul 2 07:00:17.795327 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 07:00:17.795335 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 07:00:17.795344 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 07:00:17.795352 kernel: Dynamic Preempt: voluntary Jul 2 07:00:17.795360 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 07:00:17.795370 kernel: rcu: RCU event tracing is enabled. Jul 2 07:00:17.795379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 07:00:17.795400 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 07:00:17.795408 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:00:17.795417 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:00:17.795433 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:00:17.795441 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 07:00:17.795450 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 07:00:17.795458 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 07:00:17.795466 kernel: Console: colour dummy device 80x25 Jul 2 07:00:17.795475 kernel: printk: console [ttyS0] enabled Jul 2 07:00:17.795483 kernel: ACPI: Core revision 20220331 Jul 2 07:00:17.795493 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 07:00:17.795502 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:00:17.795511 kernel: x2apic enabled Jul 2 07:00:17.795519 kernel: Switched APIC routing to physical x2apic. Jul 2 07:00:17.795527 kernel: kvm-guest: setup PV IPIs Jul 2 07:00:17.795537 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 07:00:17.795546 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 07:00:17.795554 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 07:00:17.795563 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 07:00:17.795571 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 07:00:17.795580 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 07:00:17.795588 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:00:17.795596 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:00:17.795605 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:00:17.795615 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:00:17.795624 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 07:00:17.795632 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 07:00:17.795640 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 07:00:17.795649 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 07:00:17.795657 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:00:17.795666 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:00:17.795674 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:00:17.795683 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:00:17.795693 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 2 07:00:17.795701 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:00:17.795710 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:00:17.795718 kernel: LSM: Security Framework initializing Jul 2 07:00:17.795726 kernel: SELinux: Initializing. Jul 2 07:00:17.795735 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:00:17.795743 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 07:00:17.795752 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 07:00:17.795760 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 07:00:17.795770 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 07:00:17.795779 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 07:00:17.795787 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 07:00:17.795795 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 07:00:17.795804 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 07:00:17.795812 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 07:00:17.795820 kernel: ... version: 0 Jul 2 07:00:17.795828 kernel: ... bit width: 48 Jul 2 07:00:17.795837 kernel: ... generic registers: 6 Jul 2 07:00:17.795845 kernel: ... value mask: 0000ffffffffffff Jul 2 07:00:17.795855 kernel: ... max period: 00007fffffffffff Jul 2 07:00:17.795864 kernel: ... fixed-purpose events: 0 Jul 2 07:00:17.795872 kernel: ... event mask: 000000000000003f Jul 2 07:00:17.795880 kernel: signal: max sigframe size: 1776 Jul 2 07:00:17.795891 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:00:17.795901 kernel: rcu: Max phase no-delay instances is 400. Jul 2 07:00:17.795912 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:00:17.795922 kernel: x86: Booting SMP configuration: Jul 2 07:00:17.795933 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 2 07:00:17.795945 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 07:00:17.795956 kernel: smpboot: Max logical packages: 1 Jul 2 07:00:17.795966 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 07:00:17.795976 kernel: devtmpfs: initialized Jul 2 07:00:17.795987 kernel: x86/mm: Memory block size: 128MB Jul 2 07:00:17.795997 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 2 07:00:17.796008 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 2 07:00:17.796019 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 2 07:00:17.796029 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 2 07:00:17.796041 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 2 07:00:17.796049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:00:17.796058 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 07:00:17.796066 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:00:17.796074 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:00:17.796083 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:00:17.796091 kernel: audit: type=2000 audit(1719903618.030:1): state=initialized audit_enabled=0 res=1 Jul 2 07:00:17.796099 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:00:17.796108 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:00:17.796118 kernel: cpuidle: using governor menu Jul 2 07:00:17.796127 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:00:17.796135 kernel: dca service started, version 1.12.1 Jul 2 07:00:17.796143 kernel: PCI: Using configuration type 1 for base access Jul 2 07:00:17.796152 kernel: PCI: Using configuration type 1 for extended access Jul 2 07:00:17.796160 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:00:17.796169 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:00:17.796177 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 07:00:17.796185 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:00:17.796195 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 07:00:17.796204 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:00:17.796221 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:00:17.796229 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:00:17.796238 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:00:17.796246 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:00:17.796254 kernel: ACPI: Interpreter enabled Jul 2 07:00:17.796262 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 07:00:17.796271 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:00:17.796281 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:00:17.796290 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 07:00:17.796298 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 07:00:17.796307 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 07:00:17.796451 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 07:00:17.796467 kernel: acpiphp: Slot [3] registered Jul 2 07:00:17.796475 kernel: acpiphp: Slot [4] registered Jul 2 07:00:17.796484 kernel: acpiphp: Slot [5] registered Jul 2 07:00:17.796494 kernel: acpiphp: Slot [6] registered Jul 2 07:00:17.796503 kernel: acpiphp: Slot [7] registered Jul 2 07:00:17.796511 kernel: acpiphp: Slot [8] registered Jul 2 07:00:17.796519 kernel: acpiphp: Slot [9] registered Jul 2 07:00:17.796527 kernel: acpiphp: Slot [10] registered Jul 2 07:00:17.796535 kernel: acpiphp: Slot [11] registered Jul 2 07:00:17.796544 kernel: acpiphp: Slot [12] registered Jul 2 07:00:17.796552 kernel: acpiphp: Slot [13] registered Jul 2 07:00:17.796560 kernel: acpiphp: Slot [14] registered Jul 2 07:00:17.796571 kernel: acpiphp: Slot [15] registered Jul 2 07:00:17.796580 kernel: acpiphp: Slot [16] registered Jul 2 07:00:17.796588 kernel: acpiphp: Slot [17] registered Jul 2 07:00:17.796596 kernel: acpiphp: Slot [18] registered Jul 2 07:00:17.796604 kernel: acpiphp: Slot [19] registered Jul 2 07:00:17.796613 kernel: acpiphp: Slot [20] registered Jul 2 07:00:17.796621 kernel: acpiphp: Slot [21] registered Jul 2 07:00:17.796629 kernel: acpiphp: Slot [22] registered Jul 2 07:00:17.796638 kernel: acpiphp: Slot [23] registered Jul 2 07:00:17.796646 kernel: acpiphp: Slot [24] registered Jul 2 07:00:17.796656 kernel: acpiphp: Slot [25] registered Jul 2 07:00:17.796664 kernel: acpiphp: Slot [26] registered Jul 2 07:00:17.796673 kernel: acpiphp: Slot [27] registered Jul 2 07:00:17.796681 kernel: acpiphp: Slot [28] registered Jul 2 07:00:17.796689 kernel: acpiphp: Slot [29] registered Jul 2 07:00:17.796697 kernel: acpiphp: Slot [30] registered Jul 2 07:00:17.796705 kernel: acpiphp: Slot [31] registered Jul 2 07:00:17.796714 kernel: PCI host bridge to bus 0000:00 Jul 2 07:00:17.796811 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 07:00:17.796896 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 07:00:17.796980 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 07:00:17.797076 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff 
window] Jul 2 07:00:17.797174 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0x87fffffff window] Jul 2 07:00:17.797267 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 07:00:17.797372 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 07:00:17.797491 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 07:00:17.797592 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 07:00:17.797684 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 07:00:17.797772 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 07:00:17.797861 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 07:00:17.797951 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 07:00:17.798042 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 07:00:17.798153 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 07:00:17.798230 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 07:00:17.798295 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 07:00:17.798366 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 07:00:17.798452 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 2 07:00:17.798518 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xc1043000-0xc1043fff] Jul 2 07:00:17.798583 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 2 07:00:17.798650 kernel: pci 0000:00:02.0: BAR 0: assigned to efifb Jul 2 07:00:17.798715 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 07:00:17.798787 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 07:00:17.798854 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 07:00:17.798930 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 2 07:00:17.799009 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 2 07:00:17.799088 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 07:00:17.799154 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 07:00:17.799229 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 2 07:00:17.799295 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 2 07:00:17.799365 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 07:00:17.799454 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 07:00:17.799521 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc1040000-0xc1040fff] Jul 2 07:00:17.799596 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 2 07:00:17.799665 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 2 07:00:17.799676 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 07:00:17.799685 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 07:00:17.799694 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 07:00:17.799703 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 07:00:17.799711 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 07:00:17.799720 kernel: iommu: Default domain type: Translated Jul 2 07:00:17.799728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:00:17.799740 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:00:17.799749 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Jul 2 07:00:17.799758 kernel: PTP clock support registered Jul 2 07:00:17.799766 kernel: Registered efivars operations Jul 2 07:00:17.799774 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:00:17.799783 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 07:00:17.799791 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 2 07:00:17.799799 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 2 07:00:17.799807 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 2 07:00:17.799818 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 2 07:00:17.799923 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 07:00:17.800025 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 07:00:17.800137 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 07:00:17.800153 kernel: vgaarb: loaded Jul 2 07:00:17.800164 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 07:00:17.800174 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 07:00:17.800185 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 07:00:17.800195 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:00:17.800218 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:00:17.800227 kernel: pnp: PnP ACPI init Jul 2 07:00:17.800321 kernel: pnp 00:02: [dma 2] Jul 2 07:00:17.800335 kernel: pnp: PnP ACPI: found 6 devices Jul 2 07:00:17.800343 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:00:17.800352 kernel: NET: Registered PF_INET protocol family Jul 2 07:00:17.800360 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:00:17.800369 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 07:00:17.800380 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:00:17.800454 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 07:00:17.800463 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 07:00:17.800471 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 07:00:17.800480 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:00:17.800488 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 07:00:17.800497 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:00:17.800505 kernel: NET: Registered PF_XDP protocol family Jul 2 07:00:17.800599 kernel: pci 0000:00:05.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 2 07:00:17.800693 kernel: pci 0000:00:05.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 2 07:00:17.800773 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 07:00:17.800851 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 07:00:17.800929 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 07:00:17.801006 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 07:00:17.801085 kernel: pci_bus 0000:00: resource 8 [mem 0x800000000-0x87fffffff window] Jul 2 07:00:17.801174 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 07:00:17.801279 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 07:00:17.801292 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:00:17.801301 
kernel: Initialise system trusted keyrings Jul 2 07:00:17.801310 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 07:00:17.801318 kernel: Key type asymmetric registered Jul 2 07:00:17.801327 kernel: Asymmetric key parser 'x509' registered Jul 2 07:00:17.801335 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 07:00:17.801343 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:00:17.801352 kernel: io scheduler mq-deadline registered Jul 2 07:00:17.801363 kernel: io scheduler kyber registered Jul 2 07:00:17.801371 kernel: io scheduler bfq registered Jul 2 07:00:17.801379 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:00:17.801400 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 07:00:17.801409 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 07:00:17.801418 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 07:00:17.801426 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:00:17.801434 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:00:17.801443 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 07:00:17.801454 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 07:00:17.801463 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 07:00:17.801575 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 07:00:17.801591 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:00:17.801670 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 07:00:17.801752 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T07:00:17 UTC (1719903617) Jul 2 07:00:17.801832 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 07:00:17.801848 kernel: efifb: probing for efifb Jul 2 07:00:17.801857 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 2 07:00:17.801865 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 2 07:00:17.801874 kernel: efifb: scrolling: redraw Jul 2 07:00:17.801883 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 2 07:00:17.801892 kernel: Console: switching to colour frame buffer device 100x37 Jul 2 07:00:17.801903 kernel: fb0: EFI VGA frame buffer device Jul 2 07:00:17.801914 kernel: pstore: Registered efi as persistent store backend Jul 2 07:00:17.801925 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:00:17.801936 kernel: Segment Routing with IPv6 Jul 2 07:00:17.801949 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:00:17.801960 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:00:17.801971 kernel: Key type dns_resolver registered Jul 2 07:00:17.801981 kernel: IPI shorthand broadcast: enabled Jul 2 07:00:17.801992 kernel: sched_clock: Marking stable (528003762, 111753259)->(660565406, -20808385) Jul 2 07:00:17.802004 kernel: registered taskstats version 1 Jul 2 07:00:17.802017 kernel: Loading compiled-in X.509 certificates Jul 2 07:00:17.802028 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 07:00:17.802038 kernel: Key type .fscrypt registered Jul 2 07:00:17.802049 kernel: Key type fscrypt-provisioning registered Jul 2 07:00:17.802057 kernel: pstore: Using crash dump compression: deflate Jul 2 07:00:17.802066 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:00:17.802075 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:00:17.802084 kernel: ima: No architecture policies found Jul 2 07:00:17.802094 kernel: clk: Disabling unused clocks Jul 2 07:00:17.802103 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 07:00:17.802112 kernel: Write protecting the kernel read-only data: 34816k Jul 2 07:00:17.802122 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:00:17.802131 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 07:00:17.802140 kernel: Run /init as init process Jul 2 07:00:17.802149 kernel: with arguments: Jul 2 07:00:17.802157 kernel: /init Jul 2 07:00:17.802166 kernel: with environment: Jul 2 07:00:17.802176 kernel: HOME=/ Jul 2 07:00:17.802184 kernel: TERM=linux Jul 2 07:00:17.802193 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:00:17.802213 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:00:17.802225 systemd[1]: Detected virtualization kvm. Jul 2 07:00:17.802235 systemd[1]: Detected architecture x86-64. Jul 2 07:00:17.802244 systemd[1]: Running in initrd. Jul 2 07:00:17.802255 systemd[1]: No hostname configured, using default hostname. Jul 2 07:00:17.802265 systemd[1]: Hostname set to <localhost>. Jul 2 07:00:17.802274 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:00:17.802284 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:00:17.802293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:00:17.802302 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:00:17.802312 systemd[1]: Reached target paths.target - Path Units. Jul 2 07:00:17.802321 systemd[1]: Reached target slices.target - Slice Units. Jul 2 07:00:17.802330 systemd[1]: Reached target swap.target - Swaps. Jul 2 07:00:17.802341 systemd[1]: Reached target timers.target - Timer Units. Jul 2 07:00:17.802351 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 07:00:17.802361 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 07:00:17.802370 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 07:00:17.802380 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 07:00:17.802402 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 07:00:17.802411 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:00:17.802423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 07:00:17.802432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:00:17.802441 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 07:00:17.802451 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 07:00:17.802460 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 07:00:17.802470 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:00:17.802480 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 07:00:17.802490 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 07:00:17.802502 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 07:00:17.802512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:00:17.802521 kernel: audit: type=1130 audit(1719903617.794:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.802531 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:00:17.802539 kernel: audit: type=1130 audit(1719903617.799:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.802546 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:00:17.802557 systemd-journald[195]: Journal started Jul 2 07:00:17.802592 systemd-journald[195]: Runtime Journal (/run/log/journal/efcef03105454b9a97a8d3afa09c3dfb) is 6.0M, max 48.3M, 42.3M free. Jul 2 07:00:17.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.807403 kernel: audit: type=1130 audit(1719903617.804:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.807423 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 07:00:17.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.811767 systemd-modules-load[196]: Inserted module 'overlay' Jul 2 07:00:17.821586 kernel: audit: type=1130 audit(1719903617.808:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.821521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 07:00:17.822188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 07:00:17.823174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 07:00:17.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.834156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 07:00:17.841284 kernel: audit: type=1130 audit(1719903617.833:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:17.841305 kernel: audit: type=1130 audit(1719903617.836:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.841316 kernel: audit: type=1334 audit(1719903617.837:8): prog-id=6 op=LOAD Jul 2 07:00:17.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.837000 audit: BPF prog-id=6 op=LOAD Jul 2 07:00:17.836994 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:00:17.838061 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 07:00:17.843659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:00:17.849611 kernel: audit: type=1130 audit(1719903617.845:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.846166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 07:00:17.854744 dracut-cmdline[216]: dracut-dracut-053 Jul 2 07:00:17.856560 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:00:17.873416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:00:17.875692 systemd-modules-load[196]: Inserted module 'br_netfilter' Jul 2 07:00:17.876808 kernel: Bridge firewalling registered Jul 2 07:00:17.881936 systemd-resolved[215]: Positive Trust Anchors: Jul 2 07:00:17.881952 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:00:17.881993 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:00:17.884904 systemd-resolved[215]: Defaulting to hostname 'linux'. Jul 2 07:00:17.895000 kernel: audit: type=1130 audit(1719903617.891:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:17.885830 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 07:00:17.891840 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:00:17.915406 kernel: SCSI subsystem initialized Jul 2 07:00:17.927416 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:00:17.933556 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:00:17.933596 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:00:17.933608 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 07:00:17.937514 systemd-modules-load[196]: Inserted module 'dm_multipath' Jul 2 07:00:17.939171 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 07:00:17.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.943408 kernel: iscsi: registered transport (tcp) Jul 2 07:00:17.945524 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 07:00:17.952043 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:00:17.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:17.968407 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:00:17.968429 kernel: QLogic iSCSI HBA Driver Jul 2 07:00:17.992546 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 07:00:17.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:18.003525 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 07:00:18.062412 kernel: raid6: avx2x4 gen() 29551 MB/s Jul 2 07:00:18.079409 kernel: raid6: avx2x2 gen() 30909 MB/s Jul 2 07:00:18.113404 kernel: raid6: avx2x1 gen() 26003 MB/s Jul 2 07:00:18.113422 kernel: raid6: using algorithm avx2x2 gen() 30909 MB/s Jul 2 07:00:18.130493 kernel: raid6: .... xor() 19042 MB/s, rmw enabled Jul 2 07:00:18.130512 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:00:18.134404 kernel: xor: automatically using best checksumming function avx Jul 2 07:00:18.269416 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:00:18.278633 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 07:00:18.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:18.278000 audit: BPF prog-id=7 op=LOAD Jul 2 07:00:18.278000 audit: BPF prog-id=8 op=LOAD Jul 2 07:00:18.287574 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:00:18.298900 systemd-udevd[398]: Using default interface naming scheme 'v252'. Jul 2 07:00:18.302693 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:00:18.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:18.304706 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 07:00:18.315094 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Jul 2 07:00:18.338709 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 07:00:18.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:18.351555 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 07:00:18.385469 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:00:18.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:18.410439 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 2 07:00:18.417297 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 07:00:18.417520 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:00:18.417533 kernel: GPT:9289727 != 19775487 Jul 2 07:00:18.417541 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:00:18.417548 kernel: GPT:9289727 != 19775487 Jul 2 07:00:18.417555 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:00:18.417563 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:00:18.423884 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:00:18.437197 kernel: libata version 3.00 loaded. Jul 2 07:00:18.443263 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (456) Jul 2 07:00:18.443293 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 07:00:18.459908 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (459) Jul 2 07:00:18.459922 kernel: scsi host0: ata_piix Jul 2 07:00:18.460020 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:00:18.460030 kernel: AES CTR mode by8 optimization enabled Jul 2 07:00:18.460038 kernel: scsi host1: ata_piix Jul 2 07:00:18.460125 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 07:00:18.460138 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 07:00:18.443162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 07:00:18.451921 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 2 07:00:18.456606 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 07:00:18.469602 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 07:00:18.469668 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 07:00:18.481668 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 07:00:18.487871 disk-uuid[526]: Primary Header is updated. Jul 2 07:00:18.487871 disk-uuid[526]: Secondary Entries is updated. Jul 2 07:00:18.487871 disk-uuid[526]: Secondary Header is updated. 
Jul 2 07:00:18.492419 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:00:18.495424 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:00:18.613524 kernel: ata2: found unknown device (class 0) Jul 2 07:00:18.615417 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 07:00:18.617406 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 07:00:18.686791 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 07:00:18.710358 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:00:18.710370 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 07:00:19.497968 disk-uuid[527]: The operation has completed successfully. Jul 2 07:00:19.499944 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 07:00:19.525541 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:00:19.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.525626 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 07:00:19.540546 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 07:00:19.545281 sh[555]: Success Jul 2 07:00:19.556414 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 07:00:19.583874 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 07:00:19.602593 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 07:00:19.605418 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 07:00:19.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.611547 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 07:00:19.611579 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:00:19.611591 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 07:00:19.612587 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 07:00:19.613947 kernel: BTRFS info (device dm-0): using free space tree Jul 2 07:00:19.618108 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 07:00:19.620632 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 07:00:19.641554 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 07:00:19.644971 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 07:00:19.651419 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:00:19.651456 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:00:19.651476 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:00:19.660181 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 07:00:19.662276 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:00:19.668988 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 07:00:19.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.674611 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 07:00:19.709882 ignition[658]: Ignition 2.15.0 Jul 2 07:00:19.709892 ignition[658]: Stage: fetch-offline Jul 2 07:00:19.709923 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:19.709931 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:19.710011 ignition[658]: parsed url from cmdline: "" Jul 2 07:00:19.710014 ignition[658]: no config URL provided Jul 2 07:00:19.710019 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:00:19.710026 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:00:19.710047 ignition[658]: op(1): [started] loading QEMU firmware config module Jul 2 07:00:19.710052 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 07:00:19.721975 ignition[658]: op(1): [finished] loading QEMU firmware config module Jul 2 07:00:19.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.737000 audit: BPF prog-id=9 op=LOAD Jul 2 07:00:19.735632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 07:00:19.743596 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 07:00:19.766490 ignition[658]: parsing config with SHA512: 55fc0d098ac5ad2db3cd66f2dd41fb9c9cdd3021e95a77a34c911ebc1aded16bb3566a06c4ccc54692d929bab92cc3b6779154f7d5c799763b26e1105d72b76b Jul 2 07:00:19.769692 unknown[658]: fetched base config from "system" Jul 2 07:00:19.769820 unknown[658]: fetched user config from "qemu" Jul 2 07:00:19.771558 ignition[658]: fetch-offline: fetch-offline passed Jul 2 07:00:19.771645 ignition[658]: Ignition finished successfully Jul 2 07:00:19.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.772777 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:00:19.777947 systemd-networkd[745]: lo: Link UP Jul 2 07:00:19.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.777955 systemd-networkd[745]: lo: Gained carrier Jul 2 07:00:19.778339 systemd-networkd[745]: Enumeration completed Jul 2 07:00:19.778432 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 07:00:19.778544 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:00:19.778547 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 07:00:19.779306 systemd-networkd[745]: eth0: Link UP Jul 2 07:00:19.779309 systemd-networkd[745]: eth0: Gained carrier Jul 2 07:00:19.779314 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:00:19.780816 systemd[1]: Reached target network.target - Network. Jul 2 07:00:19.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.782032 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 07:00:19.804483 iscsid[757]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:00:19.804483 iscsid[757]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:00:19.804483 iscsid[757]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Jul 2 07:00:19.804483 iscsid[757]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:00:19.804483 iscsid[757]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:00:19.804483 iscsid[757]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:00:19.804483 iscsid[757]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:00:19.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.789509 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 07:00:19.802764 ignition[747]: Ignition 2.15.0 Jul 2 07:00:19.792301 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 07:00:19.802769 ignition[747]: Stage: kargs Jul 2 07:00:19.796938 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 07:00:19.802846 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:19.798527 systemd-networkd[745]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:00:19.802854 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:19.800317 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 07:00:19.803688 ignition[747]: kargs: kargs passed Jul 2 07:00:19.804632 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 07:00:19.803718 ignition[747]: Ignition finished successfully Jul 2 07:00:19.823598 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 07:00:19.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.853594 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 07:00:19.857360 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 07:00:19.865032 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jul 2 07:00:19.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.866201 ignition[759]: Ignition 2.15.0 Jul 2 07:00:19.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.866207 ignition[759]: Stage: disks Jul 2 07:00:19.868182 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 07:00:19.866292 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:19.871199 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 07:00:19.866299 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:19.873957 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:00:19.867106 ignition[759]: disks: disks passed Jul 2 07:00:19.876590 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 07:00:19.867139 ignition[759]: Ignition finished successfully Jul 2 07:00:19.885187 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 07:00:19.888293 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:00:19.891257 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 07:00:19.893992 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 07:00:19.896755 systemd[1]: Reached target basic.target - Basic System. Jul 2 07:00:19.908562 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 07:00:19.916196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 07:00:19.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.920056 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 07:00:19.930434 systemd-fsck[781]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 07:00:19.936025 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 07:00:19.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:19.949536 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 07:00:20.021405 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 07:00:20.021592 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 07:00:20.023288 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 07:00:20.038505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 07:00:20.041587 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 07:00:20.044640 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (788) Jul 2 07:00:20.044966 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
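systemd-fsck reports the root filesystem above as "clean, 14/553520 files, 52654/553472 blocks". A hypothetical helper for pulling the used/total counts out of that summary line could look like this (the regex and function names are assumptions, nothing shipped with systemd):

```python
import re

# Shape of the line: "ROOT: clean, 14/553520 files, 52654/553472 blocks"
SUMMARY_RE = re.compile(
    r"^(?P<label>\S+): clean, (?P<files_used>\d+)/(?P<files_total>\d+) files, "
    r"(?P<blocks_used>\d+)/(?P<blocks_total>\d+) blocks$"
)

def parse_fsck_summary(line: str) -> dict:
    m = SUMMARY_RE.match(line.strip())
    if m is None:
        raise ValueError("not a clean-filesystem summary line")
    return {k: (v if k == "label" else int(v)) for k, v in m.groupdict().items()}

if __name__ == "__main__":
    print(parse_fsck_summary("ROOT: clean, 14/553520 files, 52654/553472 blocks"))
```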
Jul 2 07:00:20.049667 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:00:20.049680 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:00:20.049689 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:00:20.045010 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:00:20.045033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 07:00:20.055695 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 07:00:20.057748 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 07:00:20.071551 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 07:00:20.099421 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:00:20.103223 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:00:20.107034 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:00:20.111009 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:00:20.167118 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 07:00:20.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:20.183488 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 07:00:20.186093 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 07:00:20.188598 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 07:00:20.190640 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:00:20.205068 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 07:00:20.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:20.208992 ignition[900]: INFO : Ignition 2.15.0 Jul 2 07:00:20.208992 ignition[900]: INFO : Stage: mount Jul 2 07:00:20.210591 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:20.210591 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:20.214051 ignition[900]: INFO : mount: mount passed Jul 2 07:00:20.214830 ignition[900]: INFO : Ignition finished successfully Jul 2 07:00:20.216316 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 07:00:20.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:20.227562 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 07:00:20.234157 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
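The initrd-setup-root messages above show cut being pointed at /sysroot/etc/passwd, group, shadow and gshadow before those files exist, hence the harmless "No such file or directory" output. The exact cut arguments are not in the log; as an assumed example of the kind of field extraction involved, taking the first colon-separated field of such a file might look like this:

```python
from pathlib import Path

def first_fields(path: str) -> list[str]:
    # First colon-separated field of each line (e.g. the user names in /etc/passwd).
    p = Path(path)
    if not p.exists():
        return []  # mirrors the harmless misses seen in the log above
    return [line.split(":", 1)[0] for line in p.read_text().splitlines() if line]

if __name__ == "__main__":
    print(first_fields("/etc/passwd"))
```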
Jul 2 07:00:20.239416 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (911) Jul 2 07:00:20.241562 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:00:20.241587 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:00:20.241596 kernel: BTRFS info (device vda6): using free space tree Jul 2 07:00:20.245644 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 07:00:20.277131 ignition[929]: INFO : Ignition 2.15.0 Jul 2 07:00:20.277131 ignition[929]: INFO : Stage: files Jul 2 07:00:20.279161 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:20.279161 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:20.279161 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:00:20.283006 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:00:20.283006 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:00:20.286737 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:00:20.288530 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:00:20.290027 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:00:20.288885 unknown[929]: wrote ssh authorized keys file for user: core Jul 2 07:00:20.292920 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:00:20.295197 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:00:20.324602 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:00:20.395246 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:00:20.395246 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:00:20.399372 ignition[929]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:00:20.399372 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 07:00:20.858014 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 07:00:21.242845 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:00:21.242845 ignition[929]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 07:00:21.246768 ignition[929]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:00:21.263877 ignition[929]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 07:00:21.265533 ignition[929]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 07:00:21.265533 ignition[929]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:00:21.265533 ignition[929]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:00:21.265533 ignition[929]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:00:21.265533 ignition[929]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:00:21.265533 ignition[929]: INFO : files: files passed Jul 2 07:00:21.265533 ignition[929]: INFO : Ignition finished 
successfully Jul 2 07:00:21.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.265644 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 07:00:21.286510 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 07:00:21.288928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 07:00:21.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.290765 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:00:21.290827 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 07:00:21.295575 initrd-setup-root-after-ignition[954]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 07:00:21.297973 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:00:21.297973 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:00:21.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.295727 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 07:00:21.303346 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:00:21.298027 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 07:00:21.307496 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 07:00:21.318927 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:00:21.318996 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 07:00:21.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.321031 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 07:00:21.323176 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 07:00:21.324223 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 07:00:21.324890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 07:00:21.334023 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
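In the files stage above, Ignition reports enabling prepare-helm.service and disabling coreos-metadata.service by adding or removing "enablement symlink(s)". The sketch below only illustrates that mechanism under the usual /etc/systemd/system/multi-user.target.wants layout; it is not Ignition's implementation, and the scratch root directory is hypothetical:

```python
from pathlib import Path

def set_preset(root: Path, unit: str, enabled: bool,
               wants_dir: str = "multi-user.target.wants") -> None:
    # Create or remove the enablement symlink for `unit` under `root`.
    unit_file = root / "etc/systemd/system" / unit
    link = root / "etc/systemd/system" / wants_dir / unit
    if enabled:
        link.parent.mkdir(parents=True, exist_ok=True)
        if not link.is_symlink():
            link.symlink_to(unit_file)   # "setting preset to enabled"
    elif link.is_symlink():
        link.unlink()                    # "removing enablement symlink(s)"

if __name__ == "__main__":
    sysroot = Path("/tmp/sysroot-demo")  # hypothetical scratch root, not the real /sysroot
    set_preset(sysroot, "prepare-helm.service", enabled=True)
    set_preset(sysroot, "coreos-metadata.service", enabled=False)
```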
Jul 2 07:00:21.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.341522 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 07:00:21.348381 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:00:21.350607 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:00:21.352821 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 07:00:21.354620 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:00:21.355600 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 07:00:21.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.357919 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 07:00:21.359934 systemd[1]: Stopped target basic.target - Basic System. Jul 2 07:00:21.361728 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 07:00:21.363871 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 07:00:21.366079 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 07:00:21.368341 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 07:00:21.370378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 07:00:21.372761 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 07:00:21.374825 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 07:00:21.376815 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:00:21.379103 systemd[1]: Stopped target swap.target - Swaps. Jul 2 07:00:21.380735 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:00:21.381709 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 07:00:21.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.383858 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:00:21.385959 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:00:21.386957 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 07:00:21.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.389136 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:00:21.392575 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:00:21.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.394878 systemd[1]: Stopped target paths.target - Path Units. Jul 2 07:00:21.396657 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jul 2 07:00:21.401486 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:00:21.404033 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 07:00:21.405842 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 07:00:21.407676 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:00:21.408524 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 07:00:21.410567 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:00:21.411776 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 07:00:21.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.414171 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:00:21.415228 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 07:00:21.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.428602 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 07:00:21.431012 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 07:00:21.432192 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:00:21.433335 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:00:21.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.437463 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 07:00:21.439306 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:00:21.440374 ignition[974]: INFO : Ignition 2.15.0 Jul 2 07:00:21.440374 ignition[974]: INFO : Stage: umount Jul 2 07:00:21.440374 ignition[974]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:00:21.440374 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 07:00:21.440374 ignition[974]: INFO : umount: umount passed Jul 2 07:00:21.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.440403 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:00:21.447418 ignition[974]: INFO : Ignition finished successfully Jul 2 07:00:21.446176 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:00:21.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.446270 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 07:00:21.453946 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:00:21.455435 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:00:21.456419 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. 
Jul 2 07:00:21.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.459012 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:00:21.460018 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 07:00:21.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.462448 systemd[1]: Stopped target network.target - Network. Jul 2 07:00:21.464282 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:00:21.464319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 07:00:21.467535 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:00:21.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.467574 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 07:00:21.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.469977 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:00:21.470020 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 07:00:21.472369 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:00:21.472423 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 07:00:21.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.477425 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 07:00:21.479640 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 07:00:21.481947 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:00:21.482902 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 07:00:21.484000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.486458 systemd-networkd[745]: eth0: DHCPv6 lease lost Jul 2 07:00:21.487534 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:00:21.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.487643 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 07:00:21.489724 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:00:21.489770 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. 
Jul 2 07:00:21.499000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:00:21.499512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 07:00:21.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.500627 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:00:21.500676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 07:00:21.502078 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:00:21.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.502124 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:00:21.504510 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:00:21.504545 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 07:00:21.505782 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:00:21.507689 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:00:21.515000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:00:21.508160 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:00:21.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.508251 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 07:00:21.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.516830 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:00:21.516953 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:00:21.518904 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:00:21.518976 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 07:00:21.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.520928 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jul 2 07:00:21.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.521001 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 07:00:21.523541 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:00:21.523573 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 07:00:21.525440 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:00:21.525467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:00:21.527770 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:00:21.527805 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 07:00:21.529944 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:00:21.529975 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 07:00:21.532165 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:00:21.532197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:00:21.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.532289 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:00:21.532316 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 07:00:21.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.544606 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 07:00:21.546610 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 07:00:21.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:21.546675 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:00:21.549085 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:00:21.549133 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:00:21.551955 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 07:00:21.552471 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:00:21.552565 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 07:00:21.554536 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 07:00:21.565508 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 07:00:21.570934 systemd[1]: Switching root. 
Jul 2 07:00:21.583273 iscsid[757]: iscsid shutting down. Jul 2 07:00:21.584142 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jul 2 07:00:21.584183 systemd-journald[195]: Journal stopped Jul 2 07:00:22.564770 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jul 2 07:00:22.564819 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:00:22.564836 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:00:22.564845 kernel: SELinux: policy capability open_perms=1 Jul 2 07:00:22.564857 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:00:22.564866 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:00:22.564874 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:00:22.564886 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:00:22.564895 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:00:22.564904 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:00:22.564913 systemd[1]: Successfully loaded SELinux policy in 38.261ms. Jul 2 07:00:22.564931 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.223ms. Jul 2 07:00:22.564942 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:00:22.564956 systemd[1]: Detected virtualization kvm. Jul 2 07:00:22.564966 systemd[1]: Detected architecture x86-64. Jul 2 07:00:22.564977 systemd[1]: Detected first boot. Jul 2 07:00:22.564991 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:00:22.565001 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:00:22.565012 kernel: kauditd_printk_skb: 73 callbacks suppressed Jul 2 07:00:22.565021 kernel: audit: type=1334 audit(1719903622.364:84): prog-id=12 op=LOAD Jul 2 07:00:22.565030 kernel: audit: type=1334 audit(1719903622.364:85): prog-id=3 op=UNLOAD Jul 2 07:00:22.565038 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:00:22.565048 kernel: audit: type=1334 audit(1719903622.364:86): prog-id=13 op=LOAD Jul 2 07:00:22.565057 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jul 2 07:00:22.565067 kernel: audit: type=1334 audit(1719903622.364:87): prog-id=14 op=LOAD Jul 2 07:00:22.565083 kernel: audit: type=1334 audit(1719903622.365:88): prog-id=4 op=UNLOAD Jul 2 07:00:22.565093 kernel: audit: type=1334 audit(1719903622.365:89): prog-id=5 op=UNLOAD Jul 2 07:00:22.565102 kernel: audit: type=1334 audit(1719903622.365:90): prog-id=15 op=LOAD Jul 2 07:00:22.565111 kernel: audit: type=1334 audit(1719903622.365:91): prog-id=12 op=UNLOAD Jul 2 07:00:22.565119 kernel: audit: type=1334 audit(1719903622.366:92): prog-id=16 op=LOAD Jul 2 07:00:22.565127 kernel: audit: type=1334 audit(1719903622.366:93): prog-id=17 op=LOAD Jul 2 07:00:22.565136 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:00:22.565147 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 07:00:22.565156 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:00:22.565170 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 07:00:22.565180 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jul 2 07:00:22.565189 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 07:00:22.565199 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 07:00:22.565208 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 07:00:22.565218 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 07:00:22.565229 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 07:00:22.565239 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 07:00:22.565248 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:00:22.565258 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 07:00:22.565267 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 07:00:22.565276 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 07:00:22.565285 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 07:00:22.565296 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 07:00:22.565306 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 07:00:22.565319 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 07:00:22.565329 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:00:22.565338 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 07:00:22.565348 systemd[1]: Reached target slices.target - Slice Units. Jul 2 07:00:22.565357 systemd[1]: Reached target swap.target - Swaps. Jul 2 07:00:22.565366 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 07:00:22.565376 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 07:00:22.565401 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 07:00:22.565414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:00:22.565423 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 07:00:22.565432 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:00:22.565443 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 07:00:22.565452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 07:00:22.565461 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 07:00:22.565470 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 07:00:22.565479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:22.565490 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 07:00:22.565500 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 07:00:22.565510 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 07:00:22.565519 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 07:00:22.565529 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
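Several unit names in this boot, such as system-serial\x2dgetty.slice above and run-credentials-systemd\x2dsysctl.service.mount earlier, carry systemd's \xHH escaping (\x2d is "-"). A minimal sketch of undoing that escaping, as a hypothetical helper rather than systemd's own systemd-escape tool:

```python
import re

# Undo systemd-style \xHH escapes (e.g. \x2d -> "-") in unit names.
def unescape_unit_name(name: str) -> str:
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), name)

if __name__ == "__main__":
    print(unescape_unit_name(r"system-serial\x2dgetty.slice"))
    print(unescape_unit_name(r"run-credentials-systemd\x2dsysctl.service.mount"))
```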
Jul 2 07:00:22.565538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 07:00:22.565547 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 07:00:22.565556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:00:22.565566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 07:00:22.565576 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:00:22.565585 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 07:00:22.565594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:00:22.565604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:00:22.565613 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:00:22.565622 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 07:00:22.565631 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:00:22.565641 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:00:22.565652 systemd[1]: Stopped systemd-journald.service - Journal Service. Jul 2 07:00:22.565661 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 07:00:22.565670 kernel: loop: module loaded Jul 2 07:00:22.565679 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 07:00:22.565688 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 07:00:22.565697 kernel: fuse: init (API version 7.37) Jul 2 07:00:22.565706 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 07:00:22.565718 systemd-journald[1068]: Journal started Jul 2 07:00:22.565753 systemd-journald[1068]: Runtime Journal (/run/log/journal/efcef03105454b9a97a8d3afa09c3dfb) is 6.0M, max 48.3M, 42.3M free. Jul 2 07:00:21.642000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:00:21.986000 audit: BPF prog-id=10 op=LOAD Jul 2 07:00:21.986000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:00:21.986000 audit: BPF prog-id=11 op=LOAD Jul 2 07:00:21.986000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:00:22.364000 audit: BPF prog-id=12 op=LOAD Jul 2 07:00:22.364000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:00:22.364000 audit: BPF prog-id=13 op=LOAD Jul 2 07:00:22.364000 audit: BPF prog-id=14 op=LOAD Jul 2 07:00:22.365000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:00:22.365000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:00:22.365000 audit: BPF prog-id=15 op=LOAD Jul 2 07:00:22.365000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:00:22.366000 audit: BPF prog-id=16 op=LOAD Jul 2 07:00:22.366000 audit: BPF prog-id=17 op=LOAD Jul 2 07:00:22.366000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:00:22.366000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:00:22.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:22.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.381000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:00:22.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.523000 audit: BPF prog-id=18 op=LOAD Jul 2 07:00:22.523000 audit: BPF prog-id=19 op=LOAD Jul 2 07:00:22.523000 audit: BPF prog-id=20 op=LOAD Jul 2 07:00:22.523000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:00:22.523000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:00:22.562000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:00:22.562000 audit[1068]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffc38fd13b0 a2=4000 a3=7ffc38fd144c items=0 ppid=1 pid=1068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:22.562000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:00:22.353478 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:00:22.353488 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 07:00:22.366920 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:00:22.570168 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 07:00:22.572436 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:00:22.572472 systemd[1]: Stopped verity-setup.service. Jul 2 07:00:22.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.574433 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:22.577526 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 2 07:00:22.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.578069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 07:00:22.579226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 07:00:22.580432 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 07:00:22.581496 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 07:00:22.582737 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 07:00:22.583909 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 07:00:22.585082 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:00:22.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.586416 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:00:22.586539 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 07:00:22.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.587939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:00:22.588045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:00:22.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.589366 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:00:22.589483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:00:22.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:22.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.591098 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:00:22.591202 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 07:00:22.592528 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:00:22.592631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:00:22.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.593928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 07:00:22.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.595269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 07:00:22.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.596581 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 07:00:22.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.598321 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 07:00:22.600427 kernel: ACPI: bus type drm_connector registered Jul 2 07:00:22.605527 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 07:00:22.607629 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 07:00:22.608675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:00:22.613181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 07:00:22.615822 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 07:00:22.617043 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:00:22.618127 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 07:00:22.619480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:00:22.620748 systemd-journald[1068]: Time spent on flushing to /var/log/journal/efcef03105454b9a97a8d3afa09c3dfb is 14.225ms for 1103 entries. 
Jul 2 07:00:22.620748 systemd-journald[1068]: System Journal (/var/log/journal/efcef03105454b9a97a8d3afa09c3dfb) is 8.0M, max 195.6M, 187.6M free. Jul 2 07:00:22.850489 systemd-journald[1068]: Received client request to flush runtime journal. Jul 2 07:00:22.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:22.620885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 07:00:22.625352 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:00:22.851472 udevadm[1094]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 07:00:22.625520 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 07:00:22.627135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:00:22.628650 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 07:00:22.629999 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 07:00:22.632474 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 07:00:22.652117 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:00:22.708808 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 07:00:22.710091 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 07:00:22.735468 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 07:00:22.747531 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 07:00:22.811925 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 07:00:22.851673 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
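The journald line above gives a concrete flush rate: 14.225 ms spent flushing 1103 entries to /var/log/journal. A quick back-of-the-envelope check, plain arithmetic on the logged numbers:

```python
# Figures from the systemd-journald flush message above.
flush_ms, entries = 14.225, 1103
print(f"~{flush_ms / entries * 1000:.1f} microseconds per journal entry")  # ~12.9
```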
Jul 2 07:00:22.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.261776 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 07:00:23.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.263000 audit: BPF prog-id=21 op=LOAD Jul 2 07:00:23.263000 audit: BPF prog-id=22 op=LOAD Jul 2 07:00:23.263000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:00:23.263000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:00:23.277705 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:00:23.293852 systemd-udevd[1106]: Using default interface naming scheme 'v252'. Jul 2 07:00:23.309265 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:00:23.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.311000 audit: BPF prog-id=23 op=LOAD Jul 2 07:00:23.318539 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 07:00:23.322000 audit: BPF prog-id=24 op=LOAD Jul 2 07:00:23.322000 audit: BPF prog-id=25 op=LOAD Jul 2 07:00:23.322000 audit: BPF prog-id=26 op=LOAD Jul 2 07:00:23.323871 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 07:00:23.330179 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 07:00:23.335405 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1118) Jul 2 07:00:23.338404 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1114) Jul 2 07:00:23.360847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 07:00:23.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.369733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 07:00:23.386408 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:00:23.389410 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 0 Jul 2 07:00:23.391441 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:00:23.409420 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:00:23.424014 systemd-networkd[1112]: lo: Link UP Jul 2 07:00:23.424302 systemd-networkd[1112]: lo: Gained carrier Jul 2 07:00:23.424746 systemd-networkd[1112]: Enumeration completed Jul 2 07:00:23.424881 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 07:00:23.425113 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:00:23.425171 systemd-networkd[1112]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 07:00:23.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.428497 systemd-networkd[1112]: eth0: Link UP Jul 2 07:00:23.428556 systemd-networkd[1112]: eth0: Gained carrier Jul 2 07:00:23.428604 systemd-networkd[1112]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:00:23.431747 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 07:00:23.444524 systemd-networkd[1112]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 07:00:23.453445 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:00:23.514672 kernel: SVM: TSC scaling supported Jul 2 07:00:23.514769 kernel: kvm: Nested Virtualization enabled Jul 2 07:00:23.514786 kernel: SVM: kvm: Nested Paging enabled Jul 2 07:00:23.514819 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 07:00:23.515598 kernel: SVM: Virtual GIF supported Jul 2 07:00:23.515631 kernel: SVM: LBR virtualization supported Jul 2 07:00:23.533427 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:00:23.571806 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 07:00:23.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.580520 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 07:00:23.587689 lvm[1144]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:00:23.613226 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 07:00:23.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.614674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:00:23.626567 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 07:00:23.630115 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:00:23.655271 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 07:00:23.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.656632 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:00:23.657854 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:00:23.657875 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 07:00:23.659112 systemd[1]: Reached target machines.target - Containers. Jul 2 07:00:23.670727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
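eth0 is matched by the stock /usr/lib/systemd/network/zz-default.network unit and acquires 10.0.0.127/16 with gateway 10.0.0.1 over DHCP. A hedged sketch of how to confirm this on the running system; the commented file body is an assumption about what such a catch-all DHCP unit typically contains, not a dump of the actual Flatcar file:

    # Show the unit that matched eth0 and the lease it obtained
    cat /usr/lib/systemd/network/zz-default.network
    networkctl status eth0
    # A catch-all DHCP .network unit of this kind usually reduces to (assumed content):
    #   [Match]
    #   Name=*
    #   [Network]
    #   DHCP=yes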
Jul 2 07:00:23.672124 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:00:23.672196 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:23.673641 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 07:00:23.676263 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 07:00:23.678566 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 07:00:23.681130 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 07:00:23.683111 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1147 (bootctl) Jul 2 07:00:23.685037 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 07:00:23.699819 kernel: loop0: detected capacity change from 0 to 80600 Jul 2 07:00:23.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.694927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 07:00:23.951508 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:00:23.956459 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:00:23.957025 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 07:00:23.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.962779 systemd-fsck[1154]: fsck.fat 4.2 (2021-01-31) Jul 2 07:00:23.962779 systemd-fsck[1154]: /dev/vda1: 809 files, 120401/258078 clusters Jul 2 07:00:23.964378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 07:00:23.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:23.974719 kernel: loop1: detected capacity change from 0 to 139360 Jul 2 07:00:23.974721 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 07:00:23.983012 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 07:00:23.999769 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jul 2 07:00:24.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:24.006411 kernel: loop2: detected capacity change from 0 to 210664 Jul 2 07:00:24.037406 kernel: loop3: detected capacity change from 0 to 80600 Jul 2 07:00:24.043413 kernel: loop4: detected capacity change from 0 to 139360 Jul 2 07:00:24.051410 kernel: loop5: detected capacity change from 0 to 210664 Jul 2 07:00:24.055758 (sd-sysext)[1162]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 07:00:24.057047 (sd-sysext)[1162]: Merged extensions into '/usr'. Jul 2 07:00:24.058837 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 07:00:24.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.065591 systemd[1]: Starting ensure-sysext.service... Jul 2 07:00:24.068138 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 07:00:24.080323 systemd[1]: Reloading. Jul 2 07:00:24.083472 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:00:24.085086 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:00:24.085615 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 07:00:24.086776 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:00:24.112525 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:00:24.208785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:00:24.273000 audit: BPF prog-id=27 op=LOAD Jul 2 07:00:24.273000 audit: BPF prog-id=28 op=LOAD Jul 2 07:00:24.273000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:00:24.273000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:00:24.274000 audit: BPF prog-id=29 op=LOAD Jul 2 07:00:24.274000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:00:24.274000 audit: BPF prog-id=30 op=LOAD Jul 2 07:00:24.274000 audit: BPF prog-id=31 op=LOAD Jul 2 07:00:24.274000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:00:24.274000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:00:24.276000 audit: BPF prog-id=32 op=LOAD Jul 2 07:00:24.276000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:00:24.276000 audit: BPF prog-id=33 op=LOAD Jul 2 07:00:24.276000 audit: BPF prog-id=34 op=LOAD Jul 2 07:00:24.276000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:00:24.276000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:00:24.277000 audit: BPF prog-id=35 op=LOAD Jul 2 07:00:24.277000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:00:24.279645 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 07:00:24.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.282174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
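The (sd-sysext) messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr before ensure-sysext.service runs. A short sketch of how such extensions are listed and refreshed, assuming the standard systemd-sysext verbs; the extension image name is hypothetical:

    # List currently merged extension images and their source paths
    systemd-sysext status
    # Drop a new raw extension into a standard search path (hypothetical file name) and re-merge
    cp my-extension.raw /var/lib/extensions/
    systemd-sysext refresh        # equivalent to unmerge followed by merge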
Jul 2 07:00:24.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.285954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 07:00:24.288708 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 07:00:24.290944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 07:00:24.292000 audit: BPF prog-id=36 op=LOAD Jul 2 07:00:24.293920 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 07:00:24.295000 audit: BPF prog-id=37 op=LOAD Jul 2 07:00:24.297590 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 07:00:24.300225 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 07:00:24.305000 audit[1232]: SYSTEM_BOOT pid=1232 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.309271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:24.309548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:00:24.311246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:00:24.314280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:00:24.317261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:00:24.318602 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:00:24.318803 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:24.318978 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:24.320669 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 07:00:24.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.322754 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:00:24.322911 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:00:24.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.324860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 2 07:00:24.324970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:00:24.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.326842 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:00:24.326936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:00:24.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:24.328836 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:00:24.328993 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:00:24.336000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:00:24.336000 audit[1245]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffeb2639c20 a2=420 a3=0 items=0 ppid=1221 pid=1245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:24.336000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:00:24.337094 augenrules[1245]: No rules Jul 2 07:00:24.337722 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 07:00:24.339786 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 07:00:24.341590 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 07:00:24.343302 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 07:00:24.346697 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 07:00:24.350283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:24.350573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:00:24.358002 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:00:24.361312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:00:25.442655 systemd-timesyncd[1231]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 07:00:25.442873 systemd-timesyncd[1231]: Initial clock synchronization to Tue 2024-07-02 07:00:25.442599 UTC. Jul 2 07:00:25.443708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
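augenrules reports "No rules", i.e. /etc/audit/rules.d/ compiles to an empty set before auditctl loads it (the hex PROCTITLE above decodes to "/sbin/auditctl -R /etc/audit/audit.rules"). A hedged sketch of how a rule would be added and loaded on such a system; the watch rule itself is only an example:

    # Illustrative watch rule placed where augenrules picks it up
    echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' > /etc/audit/rules.d/90-example.rules
    # Rebuild /etc/audit/audit.rules from rules.d and load it, as audit-rules.service does
    augenrules --load
    # Confirm what the kernel actually has loaded
    auditctl -l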
Jul 2 07:00:25.444848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:00:25.444960 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:25.445056 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:00:25.445202 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:25.445915 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 07:00:25.447786 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:00:25.447900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:00:25.448443 systemd-resolved[1227]: Positive Trust Anchors: Jul 2 07:00:25.448460 systemd-resolved[1227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:00:25.448490 systemd-resolved[1227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:00:25.449422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:00:25.449574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:00:25.451092 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:00:25.451293 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:00:25.451923 systemd-resolved[1227]: Defaulting to hostname 'linux'. Jul 2 07:00:25.454680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 07:00:25.456258 systemd[1]: Reached target network.target - Network. Jul 2 07:00:25.457464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:00:25.458880 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 07:00:25.460008 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:25.460246 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:00:25.475572 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:00:25.478341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 07:00:25.480926 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:00:25.483234 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:00:25.484590 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
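systemd-resolved starts with the root DNSSEC trust anchor plus the usual negative trust anchors and falls back to the hostname 'linux', while systemd-timesyncd has already synchronized against 10.0.0.1:123. Both can be inspected with standard tools; the NTP value in the commented config fragment is simply the server visible in the log:

    # DNS: per-link servers, DNSSEC state, search domains
    resolvectl status
    # Time: sync state and the server in use
    timedatectl timesync-status
    # Pinning timesyncd to an explicit server via /etc/systemd/timesyncd.conf:
    #   [Time]
    #   NTP=10.0.0.1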
Jul 2 07:00:25.484823 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:25.485009 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:00:25.485155 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:00:25.486672 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:00:25.486859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:00:25.488495 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:00:25.488656 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 07:00:25.490203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:00:25.490357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:00:25.491909 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:00:25.492071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:00:25.493796 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:00:25.493906 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 07:00:25.495139 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 07:00:25.496324 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 07:00:25.497635 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 07:00:25.498864 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 07:00:25.500001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 07:00:25.501170 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:00:25.501210 systemd[1]: Reached target paths.target - Path Units. Jul 2 07:00:25.502156 systemd[1]: Reached target timers.target - Timer Units. Jul 2 07:00:25.503868 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 07:00:25.507466 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 07:00:25.518238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 07:00:25.519501 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:25.519560 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:00:25.520264 systemd[1]: Finished ensure-sysext.service. Jul 2 07:00:25.521320 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 07:00:25.523346 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 2 07:00:25.524404 systemd[1]: Reached target basic.target - Basic System. Jul 2 07:00:25.525442 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 07:00:25.525465 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 07:00:25.526564 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 07:00:25.528749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 07:00:25.530872 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 07:00:25.533779 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 07:00:25.534390 jq[1262]: false Jul 2 07:00:25.534886 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 07:00:25.536282 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 07:00:25.538481 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 07:00:25.540654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 07:00:25.543060 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 07:00:25.546786 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 07:00:25.548034 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:00:25.548112 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:00:25.548585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:00:25.549469 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 07:00:25.549582 extend-filesystems[1263]: Found loop3 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found loop4 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found loop5 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found sr0 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda1 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda2 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda3 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found usr Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda4 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda6 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda7 Jul 2 07:00:25.551575 extend-filesystems[1263]: Found vda9 Jul 2 07:00:25.551575 extend-filesystems[1263]: Checking size of /dev/vda9 Jul 2 07:00:25.596556 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1109) Jul 2 07:00:25.596585 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 07:00:25.551886 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jul 2 07:00:25.596698 extend-filesystems[1263]: Resized partition /dev/vda9 Jul 2 07:00:25.561500 dbus-daemon[1261]: [system] SELinux support is enabled Jul 2 07:00:25.597926 update_engine[1277]: I0702 07:00:25.561812 1277 main.cc:92] Flatcar Update Engine starting Jul 2 07:00:25.597926 update_engine[1277]: I0702 07:00:25.563740 1277 update_check_scheduler.cc:74] Next update check in 11m15s Jul 2 07:00:25.566937 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 07:00:25.598259 extend-filesystems[1284]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 07:00:25.599386 jq[1279]: true Jul 2 07:00:25.577487 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:00:25.577660 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 07:00:25.599799 tar[1287]: linux-amd64/helm Jul 2 07:00:25.577946 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:00:25.600080 jq[1288]: true Jul 2 07:00:25.578099 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 07:00:25.579464 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:00:25.579614 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 07:00:25.584448 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:00:25.584471 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 07:00:25.584560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:00:25.584572 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 07:00:25.585642 systemd[1]: Started update-engine.service - Update Engine. Jul 2 07:00:25.587967 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 07:00:25.638159 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 07:00:25.641185 locksmithd[1290]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:00:25.665202 systemd-logind[1274]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 07:00:25.670061 extend-filesystems[1284]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 07:00:25.670061 extend-filesystems[1284]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 07:00:25.670061 extend-filesystems[1284]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 07:00:25.665233 systemd-logind[1274]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:00:25.676208 extend-filesystems[1263]: Resized filesystem in /dev/vda9 Jul 2 07:00:25.665623 systemd-logind[1274]: New seat seat0. Jul 2 07:00:25.668380 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:00:25.668556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 07:00:25.671147 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 07:00:25.680598 bash[1306]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:00:25.682718 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
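extend-filesystems grows /dev/vda9 online from 553472 to 1864699 4k blocks with resize2fs 1.47.0 while it is mounted on /. Done by hand, the equivalent steps look roughly like this; growpart for the partition step is an assumption, since the log only shows the filesystem resize:

    # Grow the partition to fill the disk first, if it has not been grown already (assumed tool)
    growpart /dev/vda 9
    # Online-resize the mounted ext4 filesystem up to the new partition size
    resize2fs /dev/vda9
    # Verify the result
    df -h /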
Jul 2 07:00:25.684858 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 07:00:25.685221 systemd-networkd[1112]: eth0: Gained IPv6LL Jul 2 07:00:25.689537 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 07:00:25.691371 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 07:00:25.701699 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 07:00:25.704682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:25.707457 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 07:00:25.715229 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 07:00:25.715412 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 07:00:25.717185 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 07:00:25.728261 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 07:00:25.810031 containerd[1289]: time="2024-07-02T07:00:25.809887625Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 07:00:25.842233 containerd[1289]: time="2024-07-02T07:00:25.842182099Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 07:00:25.842450 containerd[1289]: time="2024-07-02T07:00:25.842432428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844017 containerd[1289]: time="2024-07-02T07:00:25.843986432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844103 containerd[1289]: time="2024-07-02T07:00:25.844087231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844460 containerd[1289]: time="2024-07-02T07:00:25.844437218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844544 containerd[1289]: time="2024-07-02T07:00:25.844528799Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:00:25.844680 containerd[1289]: time="2024-07-02T07:00:25.844662460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844811 containerd[1289]: time="2024-07-02T07:00:25.844791763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:00:25.844878 containerd[1289]: time="2024-07-02T07:00:25.844863457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.845018 containerd[1289]: time="2024-07-02T07:00:25.845001145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 07:00:25.845325 containerd[1289]: time="2024-07-02T07:00:25.845307129Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.845404 containerd[1289]: time="2024-07-02T07:00:25.845388792Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:00:25.845465 containerd[1289]: time="2024-07-02T07:00:25.845451179Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:00:25.845659 containerd[1289]: time="2024-07-02T07:00:25.845638841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:00:25.845727 containerd[1289]: time="2024-07-02T07:00:25.845712720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:00:25.845849 containerd[1289]: time="2024-07-02T07:00:25.845831523Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:00:25.845913 containerd[1289]: time="2024-07-02T07:00:25.845899720Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854660290Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854698722Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854717507Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854750499Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854781717Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854794401Z" level=info msg="NRI interface is disabled by configuration." Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854808638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854907373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854924696Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854941206Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854956665Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 07:00:25.854978 containerd[1289]: time="2024-07-02T07:00:25.854970912Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.854988395Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855002592Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855015035Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855029642Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855045171Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855061252Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855078284Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:00:25.855309 containerd[1289]: time="2024-07-02T07:00:25.855199070Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:00:25.857296 containerd[1289]: time="2024-07-02T07:00:25.857258733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:00:25.857359 containerd[1289]: time="2024-07-02T07:00:25.857309428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857359 containerd[1289]: time="2024-07-02T07:00:25.857343652Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 07:00:25.857424 containerd[1289]: time="2024-07-02T07:00:25.857380591Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:00:25.857477 containerd[1289]: time="2024-07-02T07:00:25.857453067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857546 containerd[1289]: time="2024-07-02T07:00:25.857480569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857546 containerd[1289]: time="2024-07-02T07:00:25.857501528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857546 containerd[1289]: time="2024-07-02T07:00:25.857520043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857546 containerd[1289]: time="2024-07-02T07:00:25.857539659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857643 containerd[1289]: time="2024-07-02T07:00:25.857560719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857643 containerd[1289]: time="2024-07-02T07:00:25.857578763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 2 07:00:25.857643 containerd[1289]: time="2024-07-02T07:00:25.857595574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857643 containerd[1289]: time="2024-07-02T07:00:25.857614079Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:00:25.857782 containerd[1289]: time="2024-07-02T07:00:25.857750004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857820 containerd[1289]: time="2024-07-02T07:00:25.857792253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857820 containerd[1289]: time="2024-07-02T07:00:25.857812651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857869 containerd[1289]: time="2024-07-02T07:00:25.857833200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857869 containerd[1289]: time="2024-07-02T07:00:25.857853037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857926 containerd[1289]: time="2024-07-02T07:00:25.857874227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857926 containerd[1289]: time="2024-07-02T07:00:25.857894134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.857926 containerd[1289]: time="2024-07-02T07:00:25.857913611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 07:00:25.859118 containerd[1289]: time="2024-07-02T07:00:25.859037609Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false 
EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:00:25.859357 containerd[1289]: time="2024-07-02T07:00:25.859340707Z" level=info msg="Connect containerd service" Jul 2 07:00:25.859450 containerd[1289]: time="2024-07-02T07:00:25.859435745Z" level=info msg="using legacy CRI server" Jul 2 07:00:25.859509 containerd[1289]: time="2024-07-02T07:00:25.859495838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 07:00:25.859594 containerd[1289]: time="2024-07-02T07:00:25.859578643Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:00:25.860337 containerd[1289]: time="2024-07-02T07:00:25.860313521Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:00:25.861747 containerd[1289]: time="2024-07-02T07:00:25.861728665Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:00:25.861964 containerd[1289]: time="2024-07-02T07:00:25.861929221Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:00:25.862058 containerd[1289]: time="2024-07-02T07:00:25.862042324Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:00:25.862171 containerd[1289]: time="2024-07-02T07:00:25.862153312Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:00:25.862480 containerd[1289]: time="2024-07-02T07:00:25.861852257Z" level=info msg="Start subscribing containerd event" Jul 2 07:00:25.862565 containerd[1289]: time="2024-07-02T07:00:25.862551328Z" level=info msg="Start recovering state" Jul 2 07:00:25.862677 containerd[1289]: time="2024-07-02T07:00:25.862663759Z" level=info msg="Start event monitor" Jul 2 07:00:25.862738 containerd[1289]: time="2024-07-02T07:00:25.862726086Z" level=info msg="Start snapshots syncer" Jul 2 07:00:25.862809 containerd[1289]: time="2024-07-02T07:00:25.862795286Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:00:25.862876 containerd[1289]: time="2024-07-02T07:00:25.862862842Z" level=info msg="Start streaming server" Jul 2 07:00:25.863185 containerd[1289]: time="2024-07-02T07:00:25.863169157Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:00:25.863295 containerd[1289]: time="2024-07-02T07:00:25.863281558Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 2 07:00:25.868220 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 07:00:25.871356 containerd[1289]: time="2024-07-02T07:00:25.871335211Z" level=info msg="containerd successfully booted in 0.063478s" Jul 2 07:00:26.054969 tar[1287]: linux-amd64/LICENSE Jul 2 07:00:26.055214 tar[1287]: linux-amd64/README.md Jul 2 07:00:26.065643 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 07:00:26.311694 sshd_keygen[1281]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:00:26.333583 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 07:00:26.349534 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 07:00:26.356516 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:00:26.356686 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 07:00:26.361374 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 07:00:26.363873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:26.370067 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 07:00:26.373042 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 07:00:26.375821 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 07:00:26.377450 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 07:00:26.378795 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 07:00:26.381758 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 07:00:26.387776 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:00:26.387902 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 07:00:26.389491 systemd[1]: Startup finished in 639ms (kernel) + 3.968s (initrd) + 3.704s (userspace) = 8.313s. Jul 2 07:00:26.824143 kubelet[1347]: E0702 07:00:26.823990 1347 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:00:26.825856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:00:26.825978 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:00:34.691574 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 07:00:34.692651 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:53236.service - OpenSSH per-connection server daemon (10.0.0.1:53236). Jul 2 07:00:34.729808 sshd[1359]: Accepted publickey for core from 10.0.0.1 port 53236 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:34.731367 sshd[1359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:34.739216 systemd-logind[1274]: New session 1 of user core. Jul 2 07:00:34.740109 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 07:00:34.749392 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 07:00:34.758362 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 07:00:34.759789 systemd[1]: Starting user@500.service - User Manager for UID 500... 
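kubelet.service exits because /var/lib/kubelet/config.yaml does not exist yet; on a node like this that file is normally written later by kubeadm during init/join, so the early failure is expected at this stage. Purely to illustrate the file format, a minimal hand-written stub would look like the following; this is a sketch, not what kubeadm generates:

    # Minimal KubeletConfiguration at the path the error names (illustrative only)
    mkdir -p /var/lib/kubelet
    cat <<'EOF' > /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF
    systemctl restart kubelet.service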
Jul 2 07:00:34.762429 (systemd)[1362]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:34.826367 systemd[1362]: Queued start job for default target default.target. Jul 2 07:00:34.838443 systemd[1362]: Reached target paths.target - Paths. Jul 2 07:00:34.838461 systemd[1362]: Reached target sockets.target - Sockets. Jul 2 07:00:34.838471 systemd[1362]: Reached target timers.target - Timers. Jul 2 07:00:34.838480 systemd[1362]: Reached target basic.target - Basic System. Jul 2 07:00:34.838525 systemd[1362]: Reached target default.target - Main User Target. Jul 2 07:00:34.838545 systemd[1362]: Startup finished in 71ms. Jul 2 07:00:34.838625 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 07:00:34.839874 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 07:00:34.900370 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:53250.service - OpenSSH per-connection server daemon (10.0.0.1:53250). Jul 2 07:00:34.926756 sshd[1371]: Accepted publickey for core from 10.0.0.1 port 53250 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:34.927703 sshd[1371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:34.930970 systemd-logind[1274]: New session 2 of user core. Jul 2 07:00:34.940253 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 07:00:34.992076 sshd[1371]: pam_unix(sshd:session): session closed for user core Jul 2 07:00:35.001935 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:53250.service: Deactivated successfully. Jul 2 07:00:35.002458 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 07:00:35.002929 systemd-logind[1274]: Session 2 logged out. Waiting for processes to exit. Jul 2 07:00:35.003981 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:53262.service - OpenSSH per-connection server daemon (10.0.0.1:53262). Jul 2 07:00:35.004717 systemd-logind[1274]: Removed session 2. Jul 2 07:00:35.030328 sshd[1377]: Accepted publickey for core from 10.0.0.1 port 53262 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:35.031379 sshd[1377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:35.034551 systemd-logind[1274]: New session 3 of user core. Jul 2 07:00:35.044243 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 07:00:35.094183 sshd[1377]: pam_unix(sshd:session): session closed for user core Jul 2 07:00:35.099901 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:53262.service: Deactivated successfully. Jul 2 07:00:35.100367 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 07:00:35.100770 systemd-logind[1274]: Session 3 logged out. Waiting for processes to exit. Jul 2 07:00:35.101741 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:53270.service - OpenSSH per-connection server daemon (10.0.0.1:53270). Jul 2 07:00:35.102256 systemd-logind[1274]: Removed session 3. Jul 2 07:00:35.128631 sshd[1383]: Accepted publickey for core from 10.0.0.1 port 53270 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:35.129629 sshd[1383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:35.132618 systemd-logind[1274]: New session 4 of user core. Jul 2 07:00:35.143245 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 07:00:35.198487 sshd[1383]: pam_unix(sshd:session): session closed for user core Jul 2 07:00:35.209980 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:53270.service: Deactivated successfully. 
Jul 2 07:00:35.210615 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:00:35.211206 systemd-logind[1274]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:00:35.212570 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:53280.service - OpenSSH per-connection server daemon (10.0.0.1:53280). Jul 2 07:00:35.213218 systemd-logind[1274]: Removed session 4. Jul 2 07:00:35.240615 sshd[1389]: Accepted publickey for core from 10.0.0.1 port 53280 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:35.241704 sshd[1389]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:35.245030 systemd-logind[1274]: New session 5 of user core. Jul 2 07:00:35.257370 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 07:00:35.314211 sudo[1392]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 07:00:35.314454 sudo[1392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:00:35.332736 sudo[1392]: pam_unix(sudo:session): session closed for user root Jul 2 07:00:35.334539 sshd[1389]: pam_unix(sshd:session): session closed for user core Jul 2 07:00:35.347388 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:53280.service: Deactivated successfully. Jul 2 07:00:35.347947 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:00:35.348549 systemd-logind[1274]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:00:35.349836 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:53286.service - OpenSSH per-connection server daemon (10.0.0.1:53286). Jul 2 07:00:35.350479 systemd-logind[1274]: Removed session 5. Jul 2 07:00:35.377696 sshd[1396]: Accepted publickey for core from 10.0.0.1 port 53286 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:35.378946 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:35.382625 systemd-logind[1274]: New session 6 of user core. Jul 2 07:00:35.392265 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 07:00:35.445430 sudo[1400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 07:00:35.445654 sudo[1400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:00:35.448761 sudo[1400]: pam_unix(sudo:session): session closed for user root Jul 2 07:00:35.454019 sudo[1399]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 07:00:35.454348 sudo[1399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:00:35.482633 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 07:00:35.482000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 07:00:35.484085 auditctl[1403]: No rules Jul 2 07:00:35.484397 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 07:00:35.484586 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. 
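The audit-rules reset in this SSH session reduces to the two sudo commands recorded above; reconstructed here purely for readability (the auditctl proctitle in the records just below decodes to /sbin/auditctl -D, i.e. flush all loaded rules):

sudo rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
sudo systemctl restart audit-rules   # the stop step flushes loaded rules (the auditctl -D seen below); the augenrules reload then finds none left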
Jul 2 07:00:35.484735 kernel: kauditd_printk_skb: 96 callbacks suppressed Jul 2 07:00:35.484786 kernel: audit: type=1305 audit(1719903635.482:186): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 07:00:35.482000 audit[1403]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc42a4c3c0 a2=420 a3=0 items=0 ppid=1 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:35.486665 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 07:00:35.489970 kernel: audit: type=1300 audit(1719903635.482:186): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc42a4c3c0 a2=420 a3=0 items=0 ppid=1 pid=1403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:35.490039 kernel: audit: type=1327 audit(1719903635.482:186): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 07:00:35.482000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 07:00:35.491088 kernel: audit: type=1131 audit(1719903635.482:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.508782 augenrules[1420]: No rules Jul 2 07:00:35.509481 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 07:00:35.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.510518 sudo[1399]: pam_unix(sudo:session): session closed for user root Jul 2 07:00:35.511821 sshd[1396]: pam_unix(sshd:session): session closed for user core Jul 2 07:00:35.509000 audit[1399]: USER_END pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.515342 kernel: audit: type=1130 audit(1719903635.508:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.515392 kernel: audit: type=1106 audit(1719903635.509:189): pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.515416 kernel: audit: type=1104 audit(1719903635.509:190): pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:35.509000 audit[1399]: CRED_DISP pid=1399 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.511000 audit[1396]: USER_END pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.520338 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:53286.service: Deactivated successfully. Jul 2 07:00:35.520853 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:00:35.521705 kernel: audit: type=1106 audit(1719903635.511:191): pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.521847 kernel: audit: type=1104 audit(1719903635.511:192): pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.511000 audit[1396]: CRED_DISP pid=1396 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.523716 systemd-logind[1274]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:00:35.524655 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:53294.service - OpenSSH per-connection server daemon (10.0.0.1:53294). Jul 2 07:00:35.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.127:22-10.0.0.1:53286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.525660 systemd-logind[1274]: Removed session 6. Jul 2 07:00:35.527966 kernel: audit: type=1131 audit(1719903635.519:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.127:22-10.0.0.1:53286 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.127:22-10.0.0.1:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:35.553000 audit[1426]: USER_ACCT pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.554764 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 53294 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:00:35.554000 audit[1426]: CRED_ACQ pid=1426 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.554000 audit[1426]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb72f5b70 a2=3 a3=7f239065e480 items=0 ppid=1 pid=1426 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:35.554000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:00:35.555737 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:00:35.559482 systemd-logind[1274]: New session 7 of user core. Jul 2 07:00:35.569269 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 07:00:35.572000 audit[1426]: USER_START pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.573000 audit[1428]: CRED_ACQ pid=1428 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:00:35.620000 audit[1429]: USER_ACCT pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.620000 audit[1429]: CRED_REFR pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.621642 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:00:35.621848 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:00:35.622000 audit[1429]: USER_START pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:00:35.734430 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 07:00:36.312602 dockerd[1439]: time="2024-07-02T07:00:36.312524789Z" level=info msg="Starting up" Jul 2 07:00:36.364882 dockerd[1439]: time="2024-07-02T07:00:36.364831322Z" level=info msg="Loading containers: start." 
Jul 2 07:00:36.437000 audit[1474]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.437000 audit[1474]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc2f104c10 a2=0 a3=7f475230ee90 items=0 ppid=1439 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.437000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 07:00:36.439000 audit[1476]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.439000 audit[1476]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe58a85870 a2=0 a3=7fca967fde90 items=0 ppid=1439 pid=1476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.439000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 07:00:36.441000 audit[1478]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.441000 audit[1478]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff18d16530 a2=0 a3=7f3374195e90 items=0 ppid=1439 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.441000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:00:36.442000 audit[1480]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.442000 audit[1480]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe4d59a930 a2=0 a3=7f4144fa2e90 items=0 ppid=1439 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.442000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:00:36.445000 audit[1482]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.445000 audit[1482]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff1a3abc70 a2=0 a3=7fc40beeee90 items=0 ppid=1439 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 07:00:36.446000 audit[1484]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
07:00:36.446000 audit[1484]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd48db5800 a2=0 a3=7fcf1d47ee90 items=0 ppid=1439 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.446000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 07:00:36.462000 audit[1486]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.462000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd28366710 a2=0 a3=7fe45c5f4e90 items=0 ppid=1439 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.462000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 07:00:36.464000 audit[1488]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.464000 audit[1488]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd2d0be560 a2=0 a3=7f75a6722e90 items=0 ppid=1439 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.464000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 07:00:36.466000 audit[1490]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.466000 audit[1490]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd56e7c5f0 a2=0 a3=7fae4e68be90 items=0 ppid=1439 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:00:36.475000 audit[1494]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.475000 audit[1494]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff141d8f60 a2=0 a3=7f529f8dae90 items=0 ppid=1439 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:00:36.476000 audit[1495]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.476000 audit[1495]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc1c4b2880 a2=0 a3=7f6e857f7e90 items=0 ppid=1439 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.476000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:00:36.485180 kernel: Initializing XFRM netlink socket Jul 2 07:00:36.516000 audit[1504]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.516000 audit[1504]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd1af922a0 a2=0 a3=7fc54cab1e90 items=0 ppid=1439 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.516000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 07:00:36.531000 audit[1507]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.531000 audit[1507]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffda7e82f10 a2=0 a3=7fd216188e90 items=0 ppid=1439 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.531000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 07:00:36.536000 audit[1511]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.536000 audit[1511]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff78ecdb70 a2=0 a3=7fdbed998e90 items=0 ppid=1439 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.536000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 07:00:36.537000 audit[1513]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.537000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc6b0d5900 a2=0 a3=7fbc3b343e90 items=0 ppid=1439 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.537000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 07:00:36.539000 audit[1515]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.539000 audit[1515]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc38cae260 a2=0 a3=7f03fe0ade90 items=0 ppid=1439 pid=1515 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.539000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 07:00:36.541000 audit[1517]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.541000 audit[1517]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffe211fd3a0 a2=0 a3=7fd4d1bdbe90 items=0 ppid=1439 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.541000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 07:00:36.543000 audit[1519]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.543000 audit[1519]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffeaf450e10 a2=0 a3=7fb61177de90 items=0 ppid=1439 pid=1519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.543000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 07:00:36.549000 audit[1522]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.549000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff72a614a0 a2=0 a3=7fb3ad0d9e90 items=0 ppid=1439 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.549000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 07:00:36.551000 audit[1524]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.551000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd1a10fdb0 a2=0 a3=7fcd72b8ee90 items=0 ppid=1439 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.551000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:00:36.552000 audit[1526]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.552000 audit[1526]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=428 a0=3 a1=7ffd25cf9480 a2=0 a3=7f7c52985e90 items=0 ppid=1439 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.552000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:00:36.554000 audit[1528]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.554000 audit[1528]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd455db650 a2=0 a3=7fe879a3fe90 items=0 ppid=1439 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 07:00:36.556580 systemd-networkd[1112]: docker0: Link UP Jul 2 07:00:36.567000 audit[1532]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.567000 audit[1532]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcaabace90 a2=0 a3=7f042bb32e90 items=0 ppid=1439 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.567000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:00:36.568000 audit[1533]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:00:36.568000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffe07c0390 a2=0 a3=7fb12cbf5e90 items=0 ppid=1439 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:00:36.568000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:00:36.570108 dockerd[1439]: time="2024-07-02T07:00:36.570064738Z" level=info msg="Loading containers: done." Jul 2 07:00:36.749315 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1933152321-merged.mount: Deactivated successfully. 
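The NETFILTER_CFG burst above is dockerd programming its standard chains before bringing docker0 up; decoding the proctitle fields gives approximately the following iptables sequence (a representative reconstruction from the audit records, not a verbatim replay):

iptables --wait -t nat    -N DOCKER
iptables --wait -t filter -N DOCKER
iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
iptables --wait -t filter -N DOCKER-USER
iptables --wait -t nat    -I POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables --wait -t nat    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables --wait -I FORWARD -j DOCKER-USER
iptables --wait -I FORWARD -j DOCKER-ISOLATION-STAGE-1
iptables --wait -I FORWARD -o docker0 -j DOCKER
iptables --wait -I FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables --wait -I FORWARD -i docker0 ! -o docker0 -j ACCEPT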
Jul 2 07:00:36.905272 dockerd[1439]: time="2024-07-02T07:00:36.905155763Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:00:36.905407 dockerd[1439]: time="2024-07-02T07:00:36.905392828Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 07:00:36.905537 dockerd[1439]: time="2024-07-02T07:00:36.905520467Z" level=info msg="Daemon has completed initialization" Jul 2 07:00:36.943765 dockerd[1439]: time="2024-07-02T07:00:36.943701129Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:00:36.946492 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 07:00:36.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:36.947301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:00:36.947419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:36.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:36.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:36.956535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:37.073015 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:37.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:37.161242 kubelet[1576]: E0702 07:00:37.161113 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:00:37.165016 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:00:37.165173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:00:37.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:00:37.781607 containerd[1289]: time="2024-07-02T07:00:37.781554995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 07:00:40.557382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535601796.mount: Deactivated successfully. 
Jul 2 07:00:41.964516 containerd[1289]: time="2024-07-02T07:00:41.964459453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:41.965203 containerd[1289]: time="2024-07-02T07:00:41.965137245Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=32771801" Jul 2 07:00:41.966539 containerd[1289]: time="2024-07-02T07:00:41.966509288Z" level=info msg="ImageCreate event name:\"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:41.968638 containerd[1289]: time="2024-07-02T07:00:41.968598726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:41.970542 containerd[1289]: time="2024-07-02T07:00:41.970502456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:41.971663 containerd[1289]: time="2024-07-02T07:00:41.971625623Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"32768601\" in 4.190026535s" Jul 2 07:00:41.971723 containerd[1289]: time="2024-07-02T07:00:41.971668764Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\"" Jul 2 07:00:41.996728 containerd[1289]: time="2024-07-02T07:00:41.996685499Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 07:00:44.829867 containerd[1289]: time="2024-07-02T07:00:44.829813305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:44.832071 containerd[1289]: time="2024-07-02T07:00:44.832023790Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=29588674" Jul 2 07:00:44.834553 containerd[1289]: time="2024-07-02T07:00:44.834508219Z" level=info msg="ImageCreate event name:\"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:44.837401 containerd[1289]: time="2024-07-02T07:00:44.837368293Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:44.841491 containerd[1289]: time="2024-07-02T07:00:44.841452412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:44.842882 containerd[1289]: time="2024-07-02T07:00:44.842826589Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"31138657\" in 2.846086298s" Jul 2 07:00:44.842882 containerd[1289]: time="2024-07-02T07:00:44.842883205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\"" Jul 2 07:00:44.863642 containerd[1289]: time="2024-07-02T07:00:44.863586421Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 07:00:46.212278 containerd[1289]: time="2024-07-02T07:00:46.212166420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:46.213698 containerd[1289]: time="2024-07-02T07:00:46.213650363Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=17778120" Jul 2 07:00:46.215726 containerd[1289]: time="2024-07-02T07:00:46.215690920Z" level=info msg="ImageCreate event name:\"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:46.218998 containerd[1289]: time="2024-07-02T07:00:46.218963688Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:46.221325 containerd[1289]: time="2024-07-02T07:00:46.221288037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:46.222420 containerd[1289]: time="2024-07-02T07:00:46.222348165Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"19328121\" in 1.358704276s" Jul 2 07:00:46.222420 containerd[1289]: time="2024-07-02T07:00:46.222394692Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\"" Jul 2 07:00:46.244114 containerd[1289]: time="2024-07-02T07:00:46.244074980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 07:00:47.416034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:00:47.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:47.416307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:47.419842 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 2 07:00:47.419962 kernel: audit: type=1130 audit(1719903647.415:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:47.419984 kernel: audit: type=1131 audit(1719903647.415:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 2 07:00:47.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:47.433523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:47.526616 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:47.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:47.530146 kernel: audit: type=1130 audit(1719903647.525:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:47.910541 kubelet[1679]: E0702 07:00:47.910393 1679 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:00:47.912268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:00:47.912386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:00:47.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:00:47.916162 kernel: audit: type=1131 audit(1719903647.911:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:00:48.743370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168851492.mount: Deactivated successfully. 
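The kubelet exits above (restart counters 1 and 2 so far) all fail for the same reason: /var/lib/kubelet/config.yaml does not exist yet. On a kubeadm-style bootstrap that file is written by kubeadm init/join rather than by hand; purely as an illustrative sketch of its expected shape (the field values below are assumptions chosen to match settings the kubelet reports later in this log, not contents of the missing file):

# hypothetical minimal KubeletConfiguration, run as root; kubeadm normally writes this file
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # matches the CgroupDriver reported at 07:01:00 below
staticPodPath: /etc/kubernetes/manifests   # matches the static pod path added at 07:01:00 below
EOF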
Jul 2 07:00:50.123870 containerd[1289]: time="2024-07-02T07:00:50.123814026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:50.194379 containerd[1289]: time="2024-07-02T07:00:50.194276979Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=29035438" Jul 2 07:00:50.225745 containerd[1289]: time="2024-07-02T07:00:50.225682055Z" level=info msg="ImageCreate event name:\"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:50.230929 containerd[1289]: time="2024-07-02T07:00:50.230870785Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:50.234321 containerd[1289]: time="2024-07-02T07:00:50.234275070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:50.235117 containerd[1289]: time="2024-07-02T07:00:50.235050704Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"29034457\" in 3.990746594s" Jul 2 07:00:50.235117 containerd[1289]: time="2024-07-02T07:00:50.235115496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\"" Jul 2 07:00:50.261199 containerd[1289]: time="2024-07-02T07:00:50.261146894Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:00:50.827045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281103058.mount: Deactivated successfully. 
Jul 2 07:00:52.060147 containerd[1289]: time="2024-07-02T07:00:52.060032266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.061558 containerd[1289]: time="2024-07-02T07:00:52.061514336Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 07:00:52.064085 containerd[1289]: time="2024-07-02T07:00:52.063941026Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.067009 containerd[1289]: time="2024-07-02T07:00:52.066970528Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.069652 containerd[1289]: time="2024-07-02T07:00:52.069598356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.070958 containerd[1289]: time="2024-07-02T07:00:52.070910527Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.809717236s" Jul 2 07:00:52.070958 containerd[1289]: time="2024-07-02T07:00:52.070946735Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:00:52.096776 containerd[1289]: time="2024-07-02T07:00:52.096729416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:00:52.788825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776473176.mount: Deactivated successfully. 
Jul 2 07:00:52.895502 containerd[1289]: time="2024-07-02T07:00:52.895450183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.897288 containerd[1289]: time="2024-07-02T07:00:52.897212698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 07:00:52.899428 containerd[1289]: time="2024-07-02T07:00:52.899369403Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.901441 containerd[1289]: time="2024-07-02T07:00:52.901383901Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.903270 containerd[1289]: time="2024-07-02T07:00:52.903228270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:52.904059 containerd[1289]: time="2024-07-02T07:00:52.903987644Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 807.207793ms" Jul 2 07:00:52.904107 containerd[1289]: time="2024-07-02T07:00:52.904054619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:00:52.927668 containerd[1289]: time="2024-07-02T07:00:52.927622968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 07:00:53.536690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514039341.mount: Deactivated successfully. 
Jul 2 07:00:56.696161 containerd[1289]: time="2024-07-02T07:00:56.696067267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:56.698081 containerd[1289]: time="2024-07-02T07:00:56.698034887Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jul 2 07:00:56.699272 containerd[1289]: time="2024-07-02T07:00:56.699236570Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:56.701342 containerd[1289]: time="2024-07-02T07:00:56.701300781Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:56.703709 containerd[1289]: time="2024-07-02T07:00:56.703678060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:00:56.705014 containerd[1289]: time="2024-07-02T07:00:56.704978508Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.777307961s" Jul 2 07:00:56.705063 containerd[1289]: time="2024-07-02T07:00:56.705018854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jul 2 07:00:58.163162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:00:58.163338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:58.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.168544 kernel: audit: type=1130 audit(1719903658.162:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.168587 kernel: audit: type=1131 audit(1719903658.162:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.174354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:58.267008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:58.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:00:58.270139 kernel: audit: type=1130 audit(1719903658.266:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.305760 kubelet[1884]: E0702 07:00:58.305670 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:00:58.308578 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:00:58.308756 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:00:58.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:00:58.312172 kernel: audit: type=1131 audit(1719903658.307:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:00:58.785518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:58.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.790390 kernel: audit: type=1130 audit(1719903658.784:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.790428 kernel: audit: type=1131 audit(1719903658.786:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:58.797502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:58.815921 systemd[1]: Reloading. Jul 2 07:00:59.511158 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
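The pulls recorded between 07:00:37 and 07:00:56 (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy v1.30.2, coredns v1.11.1, pause 3.9, etcd 3.5.12-0) are the standard control-plane image set; the same set could be pre-fetched explicitly over the CRI socket shown at the start of this section, for example (illustrative only, not a command taken from this log):

# pre-pull the same images via the containerd CRI endpoint seen above
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
for img in kube-apiserver:v1.30.2 kube-controller-manager:v1.30.2 kube-scheduler:v1.30.2 \
           kube-proxy:v1.30.2 coredns/coredns:v1.11.1 pause:3.9 etcd:3.5.12-0; do
    crictl pull registry.k8s.io/$img
done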
Jul 2 07:00:59.585000 audit: BPF prog-id=41 op=LOAD Jul 2 07:00:59.585000 audit: BPF prog-id=42 op=LOAD Jul 2 07:00:59.588541 kernel: audit: type=1334 audit(1719903659.585:242): prog-id=41 op=LOAD Jul 2 07:00:59.588601 kernel: audit: type=1334 audit(1719903659.585:243): prog-id=42 op=LOAD Jul 2 07:00:59.588629 kernel: audit: type=1334 audit(1719903659.585:244): prog-id=27 op=UNLOAD Jul 2 07:00:59.585000 audit: BPF prog-id=27 op=UNLOAD Jul 2 07:00:59.589356 kernel: audit: type=1334 audit(1719903659.585:245): prog-id=28 op=UNLOAD Jul 2 07:00:59.585000 audit: BPF prog-id=28 op=UNLOAD Jul 2 07:00:59.586000 audit: BPF prog-id=43 op=LOAD Jul 2 07:00:59.586000 audit: BPF prog-id=36 op=UNLOAD Jul 2 07:00:59.587000 audit: BPF prog-id=44 op=LOAD Jul 2 07:00:59.587000 audit: BPF prog-id=38 op=UNLOAD Jul 2 07:00:59.587000 audit: BPF prog-id=45 op=LOAD Jul 2 07:00:59.587000 audit: BPF prog-id=46 op=LOAD Jul 2 07:00:59.587000 audit: BPF prog-id=39 op=UNLOAD Jul 2 07:00:59.587000 audit: BPF prog-id=40 op=UNLOAD Jul 2 07:00:59.588000 audit: BPF prog-id=47 op=LOAD Jul 2 07:00:59.588000 audit: BPF prog-id=29 op=UNLOAD Jul 2 07:00:59.588000 audit: BPF prog-id=48 op=LOAD Jul 2 07:00:59.588000 audit: BPF prog-id=49 op=LOAD Jul 2 07:00:59.588000 audit: BPF prog-id=30 op=UNLOAD Jul 2 07:00:59.588000 audit: BPF prog-id=31 op=UNLOAD Jul 2 07:00:59.590000 audit: BPF prog-id=50 op=LOAD Jul 2 07:00:59.590000 audit: BPF prog-id=32 op=UNLOAD Jul 2 07:00:59.590000 audit: BPF prog-id=51 op=LOAD Jul 2 07:00:59.590000 audit: BPF prog-id=52 op=LOAD Jul 2 07:00:59.590000 audit: BPF prog-id=33 op=UNLOAD Jul 2 07:00:59.590000 audit: BPF prog-id=34 op=UNLOAD Jul 2 07:00:59.591000 audit: BPF prog-id=53 op=LOAD Jul 2 07:00:59.591000 audit: BPF prog-id=35 op=UNLOAD Jul 2 07:00:59.592000 audit: BPF prog-id=54 op=LOAD Jul 2 07:00:59.592000 audit: BPF prog-id=37 op=UNLOAD Jul 2 07:00:59.618551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:59.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:59.620422 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:59.620726 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:00:59.620899 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:59.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:59.622906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:00:59.719872 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:00:59.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:00:59.937697 kubelet[1961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:00:59.937697 kubelet[1961]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 2 07:00:59.937697 kubelet[1961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:00:59.938775 kubelet[1961]: I0702 07:00:59.938728 1961 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:01:00.216143 kubelet[1961]: I0702 07:01:00.215977 1961 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:01:00.216143 kubelet[1961]: I0702 07:01:00.216009 1961 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:01:00.216341 kubelet[1961]: I0702 07:01:00.216236 1961 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:01:00.229815 kubelet[1961]: I0702 07:01:00.229782 1961 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:01:00.230549 kubelet[1961]: E0702 07:01:00.230529 1961 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.241849 kubelet[1961]: I0702 07:01:00.241817 1961 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:01:00.242848 kubelet[1961]: I0702 07:01:00.242790 1961 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:01:00.243617 kubelet[1961]: I0702 07:01:00.242955 1961 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:01:00.243974 kubelet[1961]: I0702 07:01:00.243956 
1961 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:01:00.244017 kubelet[1961]: I0702 07:01:00.243976 1961 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:01:00.244817 kubelet[1961]: I0702 07:01:00.244799 1961 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:01:00.245537 kubelet[1961]: I0702 07:01:00.245517 1961 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:01:00.245575 kubelet[1961]: I0702 07:01:00.245538 1961 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:01:00.245575 kubelet[1961]: I0702 07:01:00.245570 1961 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:01:00.245624 kubelet[1961]: I0702 07:01:00.245583 1961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:01:00.248216 kubelet[1961]: W0702 07:01:00.248115 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.248216 kubelet[1961]: E0702 07:01:00.248219 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.248216 kubelet[1961]: W0702 07:01:00.248157 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.248424 kubelet[1961]: E0702 07:01:00.248254 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.250034 kubelet[1961]: I0702 07:01:00.250016 1961 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 07:01:00.251327 kubelet[1961]: I0702 07:01:00.251314 1961 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:01:00.251391 kubelet[1961]: W0702 07:01:00.251356 1961 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
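The nodeConfig value dumped by container_manager_linux.go a few entries above is plain JSON, so the hard-eviction thresholds it carries (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%) can be pulled out mechanically. An illustrative parse, with the JSON trimmed to the one field of interest:

    # Sketch: extract the hard-eviction thresholds from the nodeConfig JSON printed
    # by container_manager_linux.go above. The literal here is trimmed to one field;
    # in practice you would slice the {...} blob out of the journal line itself.
    import json

    node_config_fragment = """
    {"HardEvictionThresholds":[
     {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
     {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
     {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
     {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]}
    """

    for t in json.loads(node_config_fragment)["HardEvictionThresholds"]:
        value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
        print(f'{t["Signal"]} {t["Operator"]} {value}')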
Jul 2 07:01:00.251809 kubelet[1961]: I0702 07:01:00.251795 1961 server.go:1264] "Started kubelet" Jul 2 07:01:00.251984 kubelet[1961]: I0702 07:01:00.251950 1961 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:01:00.252048 kubelet[1961]: I0702 07:01:00.252008 1961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:01:00.252631 kubelet[1961]: I0702 07:01:00.252318 1961 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:01:00.253585 kubelet[1961]: I0702 07:01:00.253322 1961 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:01:00.255020 kubelet[1961]: I0702 07:01:00.254998 1961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:01:00.258033 kubelet[1961]: E0702 07:01:00.255830 1961 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:01:00.258033 kubelet[1961]: I0702 07:01:00.255870 1961 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:01:00.258033 kubelet[1961]: E0702 07:01:00.256176 1961 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 07:01:00.258033 kubelet[1961]: W0702 07:01:00.256444 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.258033 kubelet[1961]: E0702 07:01:00.256479 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.258033 kubelet[1961]: E0702 07:01:00.256620 1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="200ms" Jul 2 07:01:00.258033 kubelet[1961]: I0702 07:01:00.256922 1961 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:01:00.258033 kubelet[1961]: I0702 07:01:00.257002 1961 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:01:00.258033 kubelet[1961]: I0702 07:01:00.257150 1961 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:01:00.258033 kubelet[1961]: I0702 07:01:00.257222 1961 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:01:00.258339 kubelet[1961]: E0702 07:01:00.257668 1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.127:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.127:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de53443d337856 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:01:00.251773014 +0000 UTC m=+0.527500881,LastTimestamp:2024-07-02 07:01:00.251773014 +0000 UTC m=+0.527500881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 07:01:00.258339 kubelet[1961]: I0702 07:01:00.257975 1961 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:01:00.258000 audit[1973]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1973 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.258000 audit[1973]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7a68f740 a2=0 a3=7fef8a4f2e90 items=0 ppid=1961 pid=1973 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.258000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 07:01:00.259000 audit[1974]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.259000 audit[1974]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc782e1870 a2=0 a3=7ff357e7be90 items=0 ppid=1961 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.259000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 07:01:00.261000 audit[1976]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.261000 audit[1976]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb3553dd0 a2=0 a3=7fa7df385e90 items=0 ppid=1961 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:01:00.262000 audit[1978]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.262000 audit[1978]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdc48d75a0 a2=0 a3=7f315a0b5e90 items=0 ppid=1961 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:01:00.268894 kubelet[1961]: I0702 07:01:00.268873 1961 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:01:00.268894 kubelet[1961]: I0702 07:01:00.268888 1961 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:01:00.269007 kubelet[1961]: I0702 07:01:00.268901 1961 state_mem.go:36] "Initialized new 
in-memory state store" Jul 2 07:01:00.268000 audit[1983]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.268000 audit[1983]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc31088c10 a2=0 a3=7f064895be90 items=0 ppid=1961 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.268000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 2 07:01:00.270283 kubelet[1961]: I0702 07:01:00.270234 1961 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:01:00.269000 audit[1985]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:00.269000 audit[1985]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe6076f3c0 a2=0 a3=7f739dc79e90 items=0 ppid=1961 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 07:01:00.271185 kubelet[1961]: I0702 07:01:00.271159 1961 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:01:00.271185 kubelet[1961]: I0702 07:01:00.271184 1961 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:01:00.271242 kubelet[1961]: I0702 07:01:00.271207 1961 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:01:00.271290 kubelet[1961]: E0702 07:01:00.271254 1961 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:01:00.273393 kubelet[1961]: W0702 07:01:00.273357 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.273451 kubelet[1961]: E0702 07:01:00.273401 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:00.587000 audit[1989]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:00.587000 audit[1989]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeeec74ee0 a2=0 a3=7f8103788e90 items=0 ppid=1961 pid=1989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 07:01:00.587000 audit[1990]: NETFILTER_CFG table=mangle:33 family=2 entries=1 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.587000 audit[1990]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeea93f410 a2=0 a3=7f7314c53e90 items=0 ppid=1961 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 07:01:00.588000 audit[1992]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.588000 audit[1992]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffe3a5a850 a2=0 a3=7fc0c8e02e90 items=0 ppid=1961 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 07:01:00.588000 audit[1991]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:00.588000 audit[1991]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe11f848e0 a2=0 a3=7f0935d3de90 items=0 ppid=1961 pid=1991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 07:01:00.589000 audit[1993]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:00.589000 audit[1993]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee4807550 a2=0 a3=7fae10da9e90 items=0 ppid=1961 pid=1993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.589000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 07:01:00.589000 audit[1994]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:00.589000 audit[1994]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe14a56f70 a2=0 a3=7f10c33d6e90 items=0 ppid=1961 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:00.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 07:01:00.591957 kubelet[1961]: I0702 07:01:00.587541 1961 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:00.591957 kubelet[1961]: E0702 07:01:00.590899 1961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 2 07:01:00.591957 kubelet[1961]: E0702 07:01:00.591002 1961 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:01:00.591957 kubelet[1961]: E0702 07:01:00.591243 1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="400ms" Jul 2 07:01:00.603939 kubelet[1961]: I0702 07:01:00.603750 1961 policy_none.go:49] "None policy: Start" Jul 2 07:01:00.604847 kubelet[1961]: I0702 07:01:00.604814 1961 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:01:00.604847 kubelet[1961]: I0702 07:01:00.604844 1961 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:01:00.623628 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 07:01:00.638798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 07:01:00.641318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
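The proctitle= fields in the NETFILTER_CFG/SYSCALL audit records above are the invoking command line, hex-encoded with NUL bytes between the arguments; the first iptables record (pid 1973), for example, decodes to "iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle", and the later records create the KUBE-FIREWALL and KUBE-KUBELET-CANARY chains the same way. A small decoder:

    # Sketch: decode the hex-encoded PROCTITLE fields in the audit records above.
    # Audit stores the process command line as hex with NUL separators between
    # argv elements, so the decode is bytes.fromhex() plus a split on b"\x00".
    def decode_proctitle(hex_proctitle: str) -> str:
        raw = bytes.fromhex(hex_proctitle)
        return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

    # First NETFILTER_CFG record above (pid 1973):
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle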
Jul 2 07:01:00.653745 kubelet[1961]: I0702 07:01:00.653694 1961 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:01:00.653989 kubelet[1961]: I0702 07:01:00.653878 1961 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:01:00.653989 kubelet[1961]: I0702 07:01:00.653988 1961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:01:00.654872 kubelet[1961]: E0702 07:01:00.654861 1961 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 07:01:00.791619 kubelet[1961]: I0702 07:01:00.791565 1961 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:01:00.792652 kubelet[1961]: I0702 07:01:00.792628 1961 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:00.792896 kubelet[1961]: E0702 07:01:00.792872 1961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 2 07:01:00.793118 kubelet[1961]: I0702 07:01:00.793090 1961 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:01:00.794107 kubelet[1961]: I0702 07:01:00.794077 1961 topology_manager.go:215] "Topology Admit Handler" podUID="1f2d928b633ae88c40742d4cbc0cec00" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:01:00.798047 systemd[1]: Created slice kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice - libcontainer container kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice. Jul 2 07:01:00.806616 systemd[1]: Created slice kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice - libcontainer container kubepods-burstable-pod5df30d679156d9b860331584e2d47675.slice. Jul 2 07:01:00.818571 systemd[1]: Created slice kubepods-burstable-pod1f2d928b633ae88c40742d4cbc0cec00.slice - libcontainer container kubepods-burstable-pod1f2d928b633ae88c40742d4cbc0cec00.slice. 
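The kubepods-burstable-pod<uid>.slice units systemd creates above follow the kubelet's systemd cgroup-driver naming: the pod UID from the Topology Admit Handler entry is appended to the QoS slice as pod<uid>, with any dashes in the UID escaped to underscores (the static-pod UIDs here are config hashes, so there is nothing to escape). A sketch of that convention, not the kubelet's actual code:

    # Sketch of the naming convention behind the kubepods-*.slice units created above.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        uid = pod_uid.replace("-", "_")          # "-" is a hierarchy separator in systemd names
        if qos_class == "Guaranteed":
            return f"kubepods-pod{uid}.slice"    # Guaranteed pods sit directly under kubepods.slice
        return f"kubepods-{qos_class.lower()}-pod{uid}.slice"

    print(pod_slice_name("Burstable", "fd87124bd1ab6d9b01dedf07aaa171f7"))
    # -> kubepods-burstable-podfd87124bd1ab6d9b01dedf07aaa171f7.slice  (matches the unit above)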
Jul 2 07:01:00.890820 kubelet[1961]: I0702 07:01:00.890151 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:00.890820 kubelet[1961]: I0702 07:01:00.890187 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:00.890820 kubelet[1961]: I0702 07:01:00.890215 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:01:00.890820 kubelet[1961]: I0702 07:01:00.890230 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:00.890820 kubelet[1961]: I0702 07:01:00.890248 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:00.891053 kubelet[1961]: I0702 07:01:00.890269 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:00.891053 kubelet[1961]: I0702 07:01:00.890291 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:00.891053 kubelet[1961]: I0702 07:01:00.890308 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:00.891053 kubelet[1961]: I0702 07:01:00.890328 1961 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:00.991898 kubelet[1961]: E0702 07:01:00.991857 1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="800ms" Jul 2 07:01:01.032261 kubelet[1961]: E0702 07:01:01.032174 1961 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.127:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.127:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de53443d337856 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 07:01:00.251773014 +0000 UTC m=+0.527500881,LastTimestamp:2024-07-02 07:01:00.251773014 +0000 UTC m=+0.527500881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 07:01:01.105833 kubelet[1961]: E0702 07:01:01.105809 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:01.106473 containerd[1289]: time="2024-07-02T07:01:01.106426545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:01.117540 kubelet[1961]: E0702 07:01:01.117504 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:01.117842 containerd[1289]: time="2024-07-02T07:01:01.117817581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:01.121088 kubelet[1961]: E0702 07:01:01.121072 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:01.121348 containerd[1289]: time="2024-07-02T07:01:01.121319924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1f2d928b633ae88c40742d4cbc0cec00,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:01.194236 kubelet[1961]: I0702 07:01:01.194112 1961 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:01.194440 kubelet[1961]: E0702 07:01:01.194413 1961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 2 07:01:01.354209 kubelet[1961]: W0702 07:01:01.354140 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.354209 kubelet[1961]: E0702 07:01:01.354205 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: Get "https://10.0.0.127:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.374840 kubelet[1961]: W0702 07:01:01.374785 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.374840 kubelet[1961]: E0702 07:01:01.374828 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.127:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.462823 kubelet[1961]: W0702 07:01:01.462644 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.462823 kubelet[1961]: E0702 07:01:01.462693 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.127:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.510249 kubelet[1961]: W0702 07:01:01.510194 1961 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.510249 kubelet[1961]: E0702 07:01:01.510235 1961 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.127:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:01.792370 kubelet[1961]: E0702 07:01:01.792270 1961 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.127:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.127:6443: connect: connection refused" interval="1.6s" Jul 2 07:01:01.957738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423211424.mount: Deactivated successfully. 
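The "Failed to ensure lease exists, will retry" messages show the lease controller backing off by doubling the interval on consecutive failures: 200ms, 400ms, 800ms, 1.6s across the entries above. A minimal sketch of that doubling pattern; the 7s cap is an assumption about the controller, not something visible in this log:

    # Sketch of the retry pattern visible in the "Failed to ensure lease exists,
    # will retry" messages above (200ms -> 400ms -> 800ms -> 1.6s -> ...).
    def lease_backoff(base: float = 0.2, cap: float = 7.0):
        interval = base
        while True:
            yield interval
            interval = min(interval * 2, cap)   # double on each consecutive failure, capped

    gen = lease_backoff()
    print([round(next(gen), 1) for _ in range(5)])  # [0.2, 0.4, 0.8, 1.6, 3.2]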
Jul 2 07:01:01.968137 containerd[1289]: time="2024-07-02T07:01:01.968089211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.969283 containerd[1289]: time="2024-07-02T07:01:01.969219805Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.970322 containerd[1289]: time="2024-07-02T07:01:01.970284892Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.971013 containerd[1289]: time="2024-07-02T07:01:01.970933229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:01:01.971917 containerd[1289]: time="2024-07-02T07:01:01.971873426Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:01:01.972747 containerd[1289]: time="2024-07-02T07:01:01.972719141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 07:01:01.973602 containerd[1289]: time="2024-07-02T07:01:01.973573825Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.974917 containerd[1289]: time="2024-07-02T07:01:01.974887098Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.976153 containerd[1289]: time="2024-07-02T07:01:01.976103076Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.977563 containerd[1289]: time="2024-07-02T07:01:01.977527273Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.978819 containerd[1289]: time="2024-07-02T07:01:01.978789841Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.981004 containerd[1289]: time="2024-07-02T07:01:01.980967356Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.981945 containerd[1289]: time="2024-07-02T07:01:01.981918224Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 860.533786ms" Jul 2 07:01:01.982845 containerd[1289]: time="2024-07-02T07:01:01.982813024Z" 
level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 864.924176ms" Jul 2 07:01:01.983300 containerd[1289]: time="2024-07-02T07:01:01.983269281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.984352 containerd[1289]: time="2024-07-02T07:01:01.984316504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 877.782482ms" Jul 2 07:01:01.985013 containerd[1289]: time="2024-07-02T07:01:01.984953709Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.985767 containerd[1289]: time="2024-07-02T07:01:01.985738548Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:01:01.995685 kubelet[1961]: I0702 07:01:01.995643 1961 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:01.996102 kubelet[1961]: E0702 07:01:01.996064 1961 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.127:6443/api/v1/nodes\": dial tcp 10.0.0.127:6443: connect: connection refused" node="localhost" Jul 2 07:01:02.199069 containerd[1289]: time="2024-07-02T07:01:02.193280469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:02.199069 containerd[1289]: time="2024-07-02T07:01:02.193331356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.199069 containerd[1289]: time="2024-07-02T07:01:02.193345052Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:02.199069 containerd[1289]: time="2024-07-02T07:01:02.193353839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.200746 containerd[1289]: time="2024-07-02T07:01:02.200527297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:02.201157 containerd[1289]: time="2024-07-02T07:01:02.200973153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.202029 containerd[1289]: time="2024-07-02T07:01:02.201950989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:02.202385 containerd[1289]: time="2024-07-02T07:01:02.202009602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.236735 containerd[1289]: time="2024-07-02T07:01:02.236597096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:02.236997 containerd[1289]: time="2024-07-02T07:01:02.236704582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.236997 containerd[1289]: time="2024-07-02T07:01:02.236731875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:02.236997 containerd[1289]: time="2024-07-02T07:01:02.236750570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:02.243346 systemd[1]: Started cri-containerd-af04ebacfacefd06a968a165b60d0319edbee3b94a3a8130c0186634ab940cf1.scope - libcontainer container af04ebacfacefd06a968a165b60d0319edbee3b94a3a8130c0186634ab940cf1. Jul 2 07:01:02.246380 systemd[1]: Started cri-containerd-089bf6c7931f362833e6e51a4e2e64a67210cd27ea2210312cd1f462c8542b9c.scope - libcontainer container 089bf6c7931f362833e6e51a4e2e64a67210cd27ea2210312cd1f462c8542b9c. Jul 2 07:01:02.261000 audit: BPF prog-id=55 op=LOAD Jul 2 07:01:02.262000 audit: BPF prog-id=56 op=LOAD Jul 2 07:01:02.262000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2023 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166303465626163666163656664303661393638613136356236306430 Jul 2 07:01:02.262000 audit: BPF prog-id=57 op=LOAD Jul 2 07:01:02.262000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2023 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.262000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166303465626163666163656664303661393638613136356236306430 Jul 2 07:01:02.262000 audit: BPF prog-id=57 op=UNLOAD Jul 2 07:01:02.262000 audit: BPF prog-id=56 op=UNLOAD Jul 2 07:01:02.262000 audit: BPF prog-id=58 op=LOAD Jul 2 07:01:02.262000 audit[2055]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2023 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.262000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166303465626163666163656664303661393638613136356236306430 Jul 2 07:01:02.283000 audit: BPF prog-id=59 op=LOAD Jul 2 07:01:02.284000 audit: BPF prog-id=60 op=LOAD Jul 2 07:01:02.284000 audit[2048]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2025 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038396266366337393331663336323833336536653531613465326536 Jul 2 07:01:02.284000 audit: BPF prog-id=61 op=LOAD Jul 2 07:01:02.284000 audit[2048]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2025 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038396266366337393331663336323833336536653531613465326536 Jul 2 07:01:02.284000 audit: BPF prog-id=61 op=UNLOAD Jul 2 07:01:02.284000 audit: BPF prog-id=60 op=UNLOAD Jul 2 07:01:02.284000 audit: BPF prog-id=62 op=LOAD Jul 2 07:01:02.284000 audit[2048]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2025 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038396266366337393331663336323833336536653531613465326536 Jul 2 07:01:02.309283 systemd[1]: Started cri-containerd-f6de6071a14bb54c8e9c9c4de726e0b9fc348143a67811b3fbcbefbb656fc23f.scope - libcontainer container f6de6071a14bb54c8e9c9c4de726e0b9fc348143a67811b3fbcbefbb656fc23f. 
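The SYSCALL audit records in this section all carry arch=c000003e, i.e. AUDIT_ARCH_X86_64, so the syscall numbers can be read off the x86_64 table: 46 is sendmsg (the netlink calls behind the NETFILTER_CFG records), 321 is bpf (runc loading cgroup programs for each container scope), and 254 is inotify_add_watch (the AVC watch denials further down). A lookup sketch limited to the numbers seen here:

    # Sketch: name the raw syscall numbers in the SYSCALL audit records above.
    # arch=c000003e is AUDIT_ARCH_X86_64; only the numbers seen in this log are listed.
    X86_64_SYSCALLS = {
        46: "sendmsg",             # iptables/ip6tables NETFILTER_CFG records (netlink)
        254: "inotify_add_watch",  # the later AVC { watch } denials
        321: "bpf",                # runc loading/attaching cgroup BPF programs
    }

    def syscall_name(arch: str, nr: int) -> str:
        if arch.lower() != "c000003e":
            return f"unknown arch {arch}, syscall {nr}"
        return X86_64_SYSCALLS.get(nr, f"syscall {nr}")

    print(syscall_name("c000003e", 321))  # -> bpf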
Jul 2 07:01:02.325000 audit: BPF prog-id=63 op=LOAD Jul 2 07:01:02.327517 containerd[1289]: time="2024-07-02T07:01:02.327484947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fd87124bd1ab6d9b01dedf07aaa171f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"af04ebacfacefd06a968a165b60d0319edbee3b94a3a8130c0186634ab940cf1\"" Jul 2 07:01:02.327580 kubelet[1961]: E0702 07:01:02.327539 1961 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.127:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.127:6443: connect: connection refused Jul 2 07:01:02.326000 audit: BPF prog-id=64 op=LOAD Jul 2 07:01:02.326000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2022 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636646536303731613134626235346338653963396334646537323665 Jul 2 07:01:02.326000 audit: BPF prog-id=65 op=LOAD Jul 2 07:01:02.326000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2022 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636646536303731613134626235346338653963396334646537323665 Jul 2 07:01:02.326000 audit: BPF prog-id=65 op=UNLOAD Jul 2 07:01:02.326000 audit: BPF prog-id=64 op=UNLOAD Jul 2 07:01:02.327000 audit: BPF prog-id=66 op=LOAD Jul 2 07:01:02.327000 audit[2073]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2022 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636646536303731613134626235346338653963396334646537323665 Jul 2 07:01:02.329675 kubelet[1961]: E0702 07:01:02.329343 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:02.332443 containerd[1289]: time="2024-07-02T07:01:02.332409008Z" level=info msg="CreateContainer within sandbox \"af04ebacfacefd06a968a165b60d0319edbee3b94a3a8130c0186634ab940cf1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:01:02.335058 containerd[1289]: time="2024-07-02T07:01:02.334915969Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5df30d679156d9b860331584e2d47675,Namespace:kube-system,Attempt:0,} returns sandbox id \"089bf6c7931f362833e6e51a4e2e64a67210cd27ea2210312cd1f462c8542b9c\"" Jul 2 07:01:02.336372 kubelet[1961]: E0702 07:01:02.336103 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:02.338455 containerd[1289]: time="2024-07-02T07:01:02.338408452Z" level=info msg="CreateContainer within sandbox \"089bf6c7931f362833e6e51a4e2e64a67210cd27ea2210312cd1f462c8542b9c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:01:02.361537 containerd[1289]: time="2024-07-02T07:01:02.361476387Z" level=info msg="CreateContainer within sandbox \"089bf6c7931f362833e6e51a4e2e64a67210cd27ea2210312cd1f462c8542b9c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"79fda33c3d36fc8a46af227e2fe7fe24d903bcd8622356d89384c1cfdc2c9932\"" Jul 2 07:01:02.362240 containerd[1289]: time="2024-07-02T07:01:02.362207470Z" level=info msg="CreateContainer within sandbox \"af04ebacfacefd06a968a165b60d0319edbee3b94a3a8130c0186634ab940cf1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe1b5e61f5b9ac9d49244fa4dd782b4b9818dac928c878feeafd763ed38175f5\"" Jul 2 07:01:02.362714 containerd[1289]: time="2024-07-02T07:01:02.362682993Z" level=info msg="StartContainer for \"79fda33c3d36fc8a46af227e2fe7fe24d903bcd8622356d89384c1cfdc2c9932\"" Jul 2 07:01:02.362900 containerd[1289]: time="2024-07-02T07:01:02.362703352Z" level=info msg="StartContainer for \"fe1b5e61f5b9ac9d49244fa4dd782b4b9818dac928c878feeafd763ed38175f5\"" Jul 2 07:01:02.364917 containerd[1289]: time="2024-07-02T07:01:02.364887865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1f2d928b633ae88c40742d4cbc0cec00,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6de6071a14bb54c8e9c9c4de726e0b9fc348143a67811b3fbcbefbb656fc23f\"" Jul 2 07:01:02.366167 kubelet[1961]: E0702 07:01:02.365970 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:02.368174 containerd[1289]: time="2024-07-02T07:01:02.368145757Z" level=info msg="CreateContainer within sandbox \"f6de6071a14bb54c8e9c9c4de726e0b9fc348143a67811b3fbcbefbb656fc23f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:01:02.388271 systemd[1]: Started cri-containerd-79fda33c3d36fc8a46af227e2fe7fe24d903bcd8622356d89384c1cfdc2c9932.scope - libcontainer container 79fda33c3d36fc8a46af227e2fe7fe24d903bcd8622356d89384c1cfdc2c9932. Jul 2 07:01:02.388605 containerd[1289]: time="2024-07-02T07:01:02.388569797Z" level=info msg="CreateContainer within sandbox \"f6de6071a14bb54c8e9c9c4de726e0b9fc348143a67811b3fbcbefbb656fc23f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0cd035295f4eb0018f21161ee2bba8206341b2da6dff0649141aab427e612729\"" Jul 2 07:01:02.389183 containerd[1289]: time="2024-07-02T07:01:02.389167024Z" level=info msg="StartContainer for \"0cd035295f4eb0018f21161ee2bba8206341b2da6dff0649141aab427e612729\"" Jul 2 07:01:02.390964 systemd[1]: Started cri-containerd-fe1b5e61f5b9ac9d49244fa4dd782b4b9818dac928c878feeafd763ed38175f5.scope - libcontainer container fe1b5e61f5b9ac9d49244fa4dd782b4b9818dac928c878feeafd763ed38175f5. 
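Each sandbox or container id that containerd reports with "returns sandbox id" / "returns container id" reappears as a cri-containerd-<id>.scope unit once systemd starts it, which is how the RunPodSandbox and StartContainer entries pair up with the scope lines. An illustrative way to correlate the two when reading a journal dump like this one:

    # Sketch: correlate the containerd "returns sandbox id"/"returns container id"
    # messages with the cri-containerd-<id>.scope units systemd starts for them.
    # Purely illustrative journal parsing; the patterns match the lines above.
    import re
    import sys

    ID_RE = re.compile(r'returns (?:sandbox|container) id \\?"([0-9a-f]{64})\\?"')
    SCOPE_RE = re.compile(r"Started cri-containerd-([0-9a-f]{64})\.scope")

    def correlate(lines):
        created, started = set(), set()
        for line in lines:
            if m := ID_RE.search(line):
                created.add(m.group(1))
            if m := SCOPE_RE.search(line):
                started.add(m.group(1))
        return created & started, created - started

    if __name__ == "__main__":
        both, pending = correlate(sys.stdin)
        print(f"{len(both)} ids have a running scope, {len(pending)} not started yet")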
Jul 2 07:01:02.400000 audit: BPF prog-id=67 op=LOAD Jul 2 07:01:02.400000 audit: BPF prog-id=68 op=LOAD Jul 2 07:01:02.400000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=2025 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666461333363336433366663386134366166323237653266653766 Jul 2 07:01:02.400000 audit: BPF prog-id=69 op=LOAD Jul 2 07:01:02.400000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=2025 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666461333363336433366663386134366166323237653266653766 Jul 2 07:01:02.400000 audit: BPF prog-id=69 op=UNLOAD Jul 2 07:01:02.400000 audit: BPF prog-id=68 op=UNLOAD Jul 2 07:01:02.400000 audit: BPF prog-id=70 op=LOAD Jul 2 07:01:02.400000 audit[2144]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=2025 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.400000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739666461333363336433366663386134366166323237653266653766 Jul 2 07:01:02.401000 audit: BPF prog-id=71 op=LOAD Jul 2 07:01:02.402000 audit: BPF prog-id=72 op=LOAD Jul 2 07:01:02.402000 audit[2145]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000125988 a2=78 a3=0 items=0 ppid=2023 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316235653631663562396163396434393234346661346464373832 Jul 2 07:01:02.402000 audit: BPF prog-id=73 op=LOAD Jul 2 07:01:02.402000 audit[2145]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000125720 a2=78 a3=0 items=0 ppid=2023 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.402000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316235653631663562396163396434393234346661346464373832 Jul 2 07:01:02.402000 audit: BPF prog-id=73 op=UNLOAD Jul 2 07:01:02.402000 audit: BPF prog-id=72 op=UNLOAD Jul 2 07:01:02.402000 audit: BPF prog-id=74 op=LOAD Jul 2 07:01:02.402000 audit[2145]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000125be0 a2=78 a3=0 items=0 ppid=2023 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665316235653631663562396163396434393234346661346464373832 Jul 2 07:01:02.421513 systemd[1]: Started cri-containerd-0cd035295f4eb0018f21161ee2bba8206341b2da6dff0649141aab427e612729.scope - libcontainer container 0cd035295f4eb0018f21161ee2bba8206341b2da6dff0649141aab427e612729. Jul 2 07:01:02.432000 audit: BPF prog-id=75 op=LOAD Jul 2 07:01:02.433000 audit: BPF prog-id=76 op=LOAD Jul 2 07:01:02.433000 audit[2189]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2022 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643033353239356634656230303138663231313631656532626261 Jul 2 07:01:02.433000 audit: BPF prog-id=77 op=LOAD Jul 2 07:01:02.433000 audit[2189]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2022 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643033353239356634656230303138663231313631656532626261 Jul 2 07:01:02.433000 audit: BPF prog-id=77 op=UNLOAD Jul 2 07:01:02.433000 audit: BPF prog-id=76 op=UNLOAD Jul 2 07:01:02.433000 audit: BPF prog-id=78 op=LOAD Jul 2 07:01:02.433000 audit[2189]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2022 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:02.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063643033353239356634656230303138663231313631656532626261 Jul 2 07:01:02.449683 containerd[1289]: time="2024-07-02T07:01:02.449531621Z" 
level=info msg="StartContainer for \"79fda33c3d36fc8a46af227e2fe7fe24d903bcd8622356d89384c1cfdc2c9932\" returns successfully" Jul 2 07:01:02.449683 containerd[1289]: time="2024-07-02T07:01:02.449649358Z" level=info msg="StartContainer for \"fe1b5e61f5b9ac9d49244fa4dd782b4b9818dac928c878feeafd763ed38175f5\" returns successfully" Jul 2 07:01:02.464625 containerd[1289]: time="2024-07-02T07:01:02.464572580Z" level=info msg="StartContainer for \"0cd035295f4eb0018f21161ee2bba8206341b2da6dff0649141aab427e612729\" returns successfully" Jul 2 07:01:03.189917 kernel: kauditd_printk_skb: 135 callbacks suppressed Jul 2 07:01:03.190086 kernel: audit: type=1400 audit(1719903663.183:321): avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:03.190110 kernel: audit: type=1300 audit(1719903663.183:321): arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0004049f0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:03.183000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:03.183000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0004049f0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:03.194579 kernel: audit: type=1327 audit(1719903663.183:321): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:03.183000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:03.184000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:03.201584 kernel: audit: type=1400 audit(1719903663.184:322): avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:03.201610 kernel: audit: type=1300 audit(1719903663.184:322): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001108020 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" 
subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:03.184000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c001108020 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:03.184000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:03.209661 kernel: audit: type=1327 audit(1719903663.184:322): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:03.282297 kubelet[1961]: E0702 07:01:03.282265 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:03.283637 kubelet[1961]: E0702 07:01:03.283604 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:03.285465 kubelet[1961]: E0702 07:01:03.285428 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:03.597875 kubelet[1961]: I0702 07:01:03.597759 1961 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:04.003000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.021168 kernel: audit: type=1400 audit(1719903664.003:323): avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.003000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c0043357d0 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.003000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.035563 kernel: audit: type=1300 audit(1719903664.003:323): arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c0043357d0 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" 
subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.035707 kernel: audit: type=1327 audit(1719903664.003:323): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.003000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.003000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.003000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c004335860 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.003000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.003000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=43 a1=c0034e5c40 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.003000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.020000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7763 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.020000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=45 a1=c00311cc00 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.020000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.021000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.021000 audit[2201]: SYSCALL arch=c000003e 
syscall=254 success=no exit=-13 a0=45 a1=c001a21c80 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.043160 kernel: audit: type=1400 audit(1719903664.003:324): avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.021000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.021000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:04.021000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=45 a1=c00311ccf0 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:01:04.021000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:01:04.047277 kubelet[1961]: E0702 07:01:04.047249 1961 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 07:01:04.143673 kubelet[1961]: I0702 07:01:04.143617 1961 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:01:04.250674 kubelet[1961]: I0702 07:01:04.250621 1961 apiserver.go:52] "Watching apiserver" Jul 2 07:01:04.257743 kubelet[1961]: I0702 07:01:04.257720 1961 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:01:04.379292 kubelet[1961]: E0702 07:01:04.379164 1961 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 2 07:01:04.379634 kubelet[1961]: E0702 07:01:04.379176 1961 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:04.379698 kubelet[1961]: E0702 07:01:04.379689 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:04.380409 kubelet[1961]: E0702 07:01:04.380242 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:05.290287 kubelet[1961]: E0702 07:01:05.290259 1961 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:05.547781 kubelet[1961]: E0702 07:01:05.547661 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:06.180988 systemd[1]: Reloading. Jul 2 07:01:06.288344 kubelet[1961]: E0702 07:01:06.288310 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:06.288564 kubelet[1961]: E0702 07:01:06.288550 1961 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:06.510000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:06.510000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:06.510000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000b82400 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:06.510000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000c96520 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:06.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:06.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:06.511000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:06.511000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000b825c0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:06.511000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:06.511000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:01:06.511000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c000d765e0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:06.511000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:06.560603 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:01:06.637000 audit: BPF prog-id=79 op=LOAD Jul 2 07:01:06.637000 audit: BPF prog-id=71 op=UNLOAD Jul 2 07:01:06.637000 audit: BPF prog-id=80 op=LOAD Jul 2 07:01:06.637000 audit: BPF prog-id=63 op=UNLOAD Jul 2 07:01:06.638000 audit: BPF prog-id=81 op=LOAD Jul 2 07:01:06.638000 audit: BPF prog-id=75 op=UNLOAD Jul 2 07:01:06.639000 audit: BPF prog-id=82 op=LOAD Jul 2 07:01:06.639000 audit: BPF prog-id=83 op=LOAD Jul 2 07:01:06.639000 audit: BPF prog-id=41 op=UNLOAD Jul 2 07:01:06.639000 audit: BPF prog-id=42 op=UNLOAD Jul 2 07:01:06.640000 audit: BPF prog-id=84 op=LOAD Jul 2 07:01:06.640000 audit: BPF prog-id=43 op=UNLOAD Jul 2 07:01:06.641000 audit: BPF prog-id=85 op=LOAD Jul 2 07:01:06.641000 audit: BPF prog-id=44 op=UNLOAD Jul 2 07:01:06.641000 audit: BPF prog-id=86 op=LOAD Jul 2 07:01:06.641000 audit: BPF prog-id=87 op=LOAD Jul 2 07:01:06.641000 audit: BPF prog-id=45 op=UNLOAD Jul 2 07:01:06.641000 audit: BPF prog-id=46 op=UNLOAD Jul 2 07:01:06.642000 audit: BPF prog-id=88 op=LOAD Jul 2 07:01:06.642000 audit: BPF prog-id=55 op=UNLOAD Jul 2 07:01:06.642000 audit: BPF prog-id=89 op=LOAD Jul 2 07:01:06.642000 audit: BPF prog-id=47 op=UNLOAD Jul 2 07:01:06.643000 audit: BPF prog-id=90 op=LOAD Jul 2 07:01:06.643000 audit: BPF prog-id=91 op=LOAD Jul 2 07:01:06.643000 audit: BPF prog-id=48 op=UNLOAD Jul 2 07:01:06.643000 audit: BPF prog-id=49 op=UNLOAD Jul 2 07:01:06.644000 audit: BPF prog-id=92 op=LOAD Jul 2 07:01:06.644000 audit: BPF prog-id=67 op=UNLOAD Jul 2 07:01:06.644000 audit: BPF prog-id=93 op=LOAD Jul 2 07:01:06.644000 audit: BPF prog-id=59 op=UNLOAD Jul 2 07:01:06.645000 audit: BPF prog-id=94 op=LOAD Jul 2 07:01:06.645000 audit: BPF prog-id=50 op=UNLOAD Jul 2 07:01:06.645000 audit: BPF prog-id=95 op=LOAD Jul 2 07:01:06.645000 audit: BPF prog-id=96 op=LOAD Jul 2 07:01:06.645000 audit: BPF prog-id=51 op=UNLOAD Jul 2 07:01:06.645000 audit: BPF prog-id=52 op=UNLOAD Jul 2 07:01:06.646000 audit: BPF prog-id=97 op=LOAD Jul 2 07:01:06.646000 audit: BPF prog-id=53 op=UNLOAD Jul 2 07:01:06.646000 audit: BPF prog-id=98 op=LOAD Jul 2 07:01:06.646000 audit: BPF prog-id=54 
op=UNLOAD Jul 2 07:01:06.658085 kubelet[1961]: I0702 07:01:06.658055 1961 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:01:06.658179 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:01:06.675364 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:01:06.675523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:01:06.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:06.686602 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:01:06.780848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:01:06.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:06.825469 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:01:06.825469 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:01:06.825469 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:01:06.825813 kubelet[2313]: I0702 07:01:06.825513 2313 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:01:06.830116 kubelet[2313]: I0702 07:01:06.830084 2313 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:01:06.830116 kubelet[2313]: I0702 07:01:06.830107 2313 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:01:06.830328 kubelet[2313]: I0702 07:01:06.830304 2313 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:01:06.831585 kubelet[2313]: I0702 07:01:06.831567 2313 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:01:06.832639 kubelet[2313]: I0702 07:01:06.832613 2313 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:01:06.838588 kubelet[2313]: I0702 07:01:06.838560 2313 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:01:06.838764 kubelet[2313]: I0702 07:01:06.838731 2313 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:01:06.838915 kubelet[2313]: I0702 07:01:06.838764 2313 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:01:06.838998 kubelet[2313]: I0702 07:01:06.838925 2313 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:01:06.838998 kubelet[2313]: I0702 07:01:06.838933 2313 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:01:06.838998 kubelet[2313]: I0702 07:01:06.838965 2313 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:01:06.839068 kubelet[2313]: I0702 07:01:06.839053 2313 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:01:06.839068 kubelet[2313]: I0702 07:01:06.839064 2313 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:01:06.839107 kubelet[2313]: I0702 07:01:06.839079 2313 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:01:06.839107 kubelet[2313]: I0702 07:01:06.839090 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:01:06.839822 kubelet[2313]: I0702 07:01:06.839811 2313 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 07:01:06.840001 kubelet[2313]: I0702 07:01:06.839984 2313 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:01:06.842390 kubelet[2313]: I0702 07:01:06.840421 2313 server.go:1264] "Started kubelet" Jul 2 07:01:06.842390 kubelet[2313]: I0702 07:01:06.840694 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:01:06.842390 kubelet[2313]: I0702 07:01:06.840824 2313 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:01:06.842390 kubelet[2313]: I0702 07:01:06.840890 2313 server.go:227] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:01:06.842390 kubelet[2313]: I0702 07:01:06.842277 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:01:06.843024 kubelet[2313]: I0702 07:01:06.843013 2313 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:01:06.847620 kubelet[2313]: I0702 07:01:06.847599 2313 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:01:06.848278 kubelet[2313]: I0702 07:01:06.848264 2313 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:01:06.848493 kubelet[2313]: I0702 07:01:06.848482 2313 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:01:06.852780 kubelet[2313]: I0702 07:01:06.852761 2313 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:01:06.852979 kubelet[2313]: I0702 07:01:06.852963 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:01:06.853640 kubelet[2313]: E0702 07:01:06.853620 2313 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:01:06.854994 kubelet[2313]: I0702 07:01:06.854978 2313 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:01:06.861436 kubelet[2313]: I0702 07:01:06.861403 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:01:06.862212 kubelet[2313]: I0702 07:01:06.862195 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:01:06.862247 kubelet[2313]: I0702 07:01:06.862217 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:01:06.862247 kubelet[2313]: I0702 07:01:06.862232 2313 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:01:06.862304 kubelet[2313]: E0702 07:01:06.862265 2313 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:01:06.883792 kubelet[2313]: I0702 07:01:06.883768 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:01:06.883792 kubelet[2313]: I0702 07:01:06.883783 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:01:06.883792 kubelet[2313]: I0702 07:01:06.883798 2313 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:01:06.883977 kubelet[2313]: I0702 07:01:06.883927 2313 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:01:06.883977 kubelet[2313]: I0702 07:01:06.883937 2313 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:01:06.883977 kubelet[2313]: I0702 07:01:06.883953 2313 policy_none.go:49] "None policy: Start" Jul 2 07:01:06.884433 kubelet[2313]: I0702 07:01:06.884419 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:01:06.884478 kubelet[2313]: I0702 07:01:06.884442 2313 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:01:06.884598 kubelet[2313]: I0702 07:01:06.884587 2313 state_mem.go:75] "Updated machine memory state" Jul 2 07:01:06.887761 kubelet[2313]: I0702 07:01:06.887748 2313 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:01:06.887908 kubelet[2313]: I0702 07:01:06.887875 2313 
container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:01:06.887971 kubelet[2313]: I0702 07:01:06.887962 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:01:06.951391 kubelet[2313]: I0702 07:01:06.951356 2313 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 07:01:06.957169 kubelet[2313]: I0702 07:01:06.957146 2313 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 07:01:06.957268 kubelet[2313]: I0702 07:01:06.957216 2313 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 07:01:06.962627 kubelet[2313]: I0702 07:01:06.962593 2313 topology_manager.go:215] "Topology Admit Handler" podUID="5df30d679156d9b860331584e2d47675" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 07:01:06.962737 kubelet[2313]: I0702 07:01:06.962668 2313 topology_manager.go:215] "Topology Admit Handler" podUID="1f2d928b633ae88c40742d4cbc0cec00" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 07:01:06.962737 kubelet[2313]: I0702 07:01:06.962712 2313 topology_manager.go:215] "Topology Admit Handler" podUID="fd87124bd1ab6d9b01dedf07aaa171f7" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 07:01:06.970294 kubelet[2313]: E0702 07:01:06.970268 2313 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:06.970384 kubelet[2313]: E0702 07:01:06.970324 2313 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.049551 kubelet[2313]: I0702 07:01:07.048920 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:07.049551 kubelet[2313]: I0702 07:01:07.048958 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:07.049551 kubelet[2313]: I0702 07:01:07.048979 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.049551 kubelet[2313]: I0702 07:01:07.048996 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.049551 kubelet[2313]: I0702 07:01:07.049011 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.049793 kubelet[2313]: I0702 07:01:07.049041 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.049793 kubelet[2313]: I0702 07:01:07.049099 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5df30d679156d9b860331584e2d47675-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5df30d679156d9b860331584e2d47675\") " pod="kube-system/kube-scheduler-localhost" Jul 2 07:01:07.049793 kubelet[2313]: I0702 07:01:07.049161 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1f2d928b633ae88c40742d4cbc0cec00-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1f2d928b633ae88c40742d4cbc0cec00\") " pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:07.049793 kubelet[2313]: I0702 07:01:07.049187 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fd87124bd1ab6d9b01dedf07aaa171f7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fd87124bd1ab6d9b01dedf07aaa171f7\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 07:01:07.266996 kubelet[2313]: E0702 07:01:07.266949 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:07.270829 kubelet[2313]: E0702 07:01:07.270790 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:07.270914 kubelet[2313]: E0702 07:01:07.270874 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:07.839872 kubelet[2313]: I0702 07:01:07.839824 2313 apiserver.go:52] "Watching apiserver" Jul 2 07:01:07.849076 kubelet[2313]: I0702 07:01:07.849042 2313 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:01:07.872819 kubelet[2313]: E0702 07:01:07.872801 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:08.304340 kubelet[2313]: E0702 07:01:08.304305 2313 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 07:01:08.304952 kubelet[2313]: E0702 07:01:08.304911 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:08.541284 kubelet[2313]: E0702 07:01:08.541246 2313 kubelet.go:1928] "Failed creating 
a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 2 07:01:08.541820 kubelet[2313]: E0702 07:01:08.541803 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:08.555980 kubelet[2313]: I0702 07:01:08.555855 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.5558351889999997 podStartE2EDuration="3.555835189s" podCreationTimestamp="2024-07-02 07:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:08.555773882 +0000 UTC m=+1.770761821" watchObservedRunningTime="2024-07-02 07:01:08.555835189 +0000 UTC m=+1.770823128" Jul 2 07:01:08.555980 kubelet[2313]: I0702 07:01:08.555971 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.555967171 podStartE2EDuration="2.555967171s" podCreationTimestamp="2024-07-02 07:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:08.3045537 +0000 UTC m=+1.519541638" watchObservedRunningTime="2024-07-02 07:01:08.555967171 +0000 UTC m=+1.770955099" Jul 2 07:01:08.597528 kubelet[2313]: I0702 07:01:08.597466 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.597444994 podStartE2EDuration="3.597444994s" podCreationTimestamp="2024-07-02 07:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:08.59670235 +0000 UTC m=+1.811690288" watchObservedRunningTime="2024-07-02 07:01:08.597444994 +0000 UTC m=+1.812432932" Jul 2 07:01:08.874442 kubelet[2313]: E0702 07:01:08.874331 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:08.875108 kubelet[2313]: E0702 07:01:08.875081 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:08.875618 kubelet[2313]: E0702 07:01:08.875600 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:11.081317 update_engine[1277]: I0702 07:01:11.081171 1277 update_attempter.cc:509] Updating boot flags... 
Jul 2 07:01:11.167248 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2364) Jul 2 07:01:11.208853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2368) Jul 2 07:01:11.235161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2368) Jul 2 07:01:13.916665 sudo[1429]: pam_unix(sudo:session): session closed for user root Jul 2 07:01:13.915000 audit[1429]: USER_END pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:01:13.917546 kernel: kauditd_printk_skb: 68 callbacks suppressed Jul 2 07:01:13.917589 kernel: audit: type=1106 audit(1719903673.915:375): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:01:13.915000 audit[1429]: CRED_DISP pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:01:13.921145 sshd[1426]: pam_unix(sshd:session): session closed for user core Jul 2 07:01:13.923189 kernel: audit: type=1104 audit(1719903673.915:376): pid=1429 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:01:13.923276 kernel: audit: type=1106 audit(1719903673.921:377): pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:13.921000 audit[1426]: USER_END pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:13.923928 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:53294.service: Deactivated successfully. Jul 2 07:01:13.924590 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:01:13.924715 systemd[1]: session-7.scope: Consumed 4.476s CPU time. Jul 2 07:01:13.925243 systemd-logind[1274]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:01:13.926105 systemd-logind[1274]: Removed session 7. 
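The audit ids interleaved with these messages, such as audit(1719903673.915:375), consist of a UNIX epoch with millisecond precision plus a per-boot serial number; the SYSCALL, PROCTITLE, AVC, and USER_END records that share an id describe a single event. A small sketch, assuming only that format, converts the id from the USER_END record above back to the Jul 2 07:01:13.915 wall-clock prefix:

```python
# Sketch: split an audit record id "audit(<epoch>.<ms>:<serial>)"; records
# sharing one id (SYSCALL, PROCTITLE, AVC, USER_END, ...) form one audit event.
from datetime import datetime, timezone


def parse_audit_id(stamp: str) -> tuple[datetime, int]:
    inner = stamp[stamp.index("(") + 1:stamp.index(")")]
    epoch, serial = inner.split(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc), int(serial)


when, serial = parse_audit_id("audit(1719903673.915:375)")
print(when.isoformat(), serial)
# 2024-07-02T07:01:13.915000+00:00 375, matching the Jul 2 07:01:13.915 prefix
```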
Jul 2 07:01:13.921000 audit[1426]: CRED_DISP pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:13.929769 kernel: audit: type=1104 audit(1719903673.921:378): pid=1426 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:13.929844 kernel: audit: type=1131 audit(1719903673.923:379): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.127:22-10.0.0.1:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:13.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.127:22-10.0.0.1:53294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:15.107641 kubelet[2313]: E0702 07:01:15.107591 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:15.261774 kubelet[2313]: E0702 07:01:15.261739 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:15.882858 kubelet[2313]: E0702 07:01:15.882815 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:15.883362 kubelet[2313]: E0702 07:01:15.883339 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:16.884447 kubelet[2313]: E0702 07:01:16.884407 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:18.540197 kubelet[2313]: E0702 07:01:18.540163 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:18.776000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=7788 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 07:01:18.776000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000d66ac0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:18.784444 kernel: audit: type=1400 audit(1719903678.776:380): avc: denied { watch } for pid=2175 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=7788 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 07:01:18.784513 kernel: audit: type=1300 
audit(1719903678.776:380): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000d66ac0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:01:18.784540 kernel: audit: type=1327 audit(1719903678.776:380): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:18.776000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:01:20.125919 kubelet[2313]: I0702 07:01:20.125874 2313 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:01:20.126545 containerd[1289]: time="2024-07-02T07:01:20.126470728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:01:20.126719 kubelet[2313]: I0702 07:01:20.126640 2313 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:01:20.976884 kubelet[2313]: I0702 07:01:20.976845 2313 topology_manager.go:215] "Topology Admit Handler" podUID="8441c562-dc85-4c05-96f7-e6f78b9e7f64" podNamespace="kube-system" podName="kube-proxy-s62jp" Jul 2 07:01:20.982145 systemd[1]: Created slice kubepods-besteffort-pod8441c562_dc85_4c05_96f7_e6f78b9e7f64.slice - libcontainer container kubepods-besteffort-pod8441c562_dc85_4c05_96f7_e6f78b9e7f64.slice. 
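The "Created slice kubepods-besteffort-pod8441c562_dc85_4c05_96f7_e6f78b9e7f64.slice" entry shows the cgroup name the kubelet derives for the kube-proxy pod admitted just before it: the QoS class plus the pod UID, with the UID's dashes rewritten to underscores so the result is a valid systemd unit name. A small sketch of that mapping, assuming nothing beyond the pattern visible in these lines:

```python
# Sketch: the systemd slice name used for the kube-proxy pod follows the
# pattern visible in the log, "kubepods-<qos>-pod<uid>" with the UID's
# dashes replaced by underscores, plus the ".slice" suffix.
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"


print(pod_slice_name("8441c562-dc85-4c05-96f7-e6f78b9e7f64"))
# kubepods-besteffort-pod8441c562_dc85_4c05_96f7_e6f78b9e7f64.slice
```

The same pattern appears again just below for the tigera-operator pod's slice.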
Jul 2 07:01:21.045256 kubelet[2313]: I0702 07:01:21.045157 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8441c562-dc85-4c05-96f7-e6f78b9e7f64-kube-proxy\") pod \"kube-proxy-s62jp\" (UID: \"8441c562-dc85-4c05-96f7-e6f78b9e7f64\") " pod="kube-system/kube-proxy-s62jp" Jul 2 07:01:21.045256 kubelet[2313]: I0702 07:01:21.045227 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8441c562-dc85-4c05-96f7-e6f78b9e7f64-xtables-lock\") pod \"kube-proxy-s62jp\" (UID: \"8441c562-dc85-4c05-96f7-e6f78b9e7f64\") " pod="kube-system/kube-proxy-s62jp" Jul 2 07:01:21.045256 kubelet[2313]: I0702 07:01:21.045253 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8441c562-dc85-4c05-96f7-e6f78b9e7f64-lib-modules\") pod \"kube-proxy-s62jp\" (UID: \"8441c562-dc85-4c05-96f7-e6f78b9e7f64\") " pod="kube-system/kube-proxy-s62jp" Jul 2 07:01:21.045447 kubelet[2313]: I0702 07:01:21.045280 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6z2\" (UniqueName: \"kubernetes.io/projected/8441c562-dc85-4c05-96f7-e6f78b9e7f64-kube-api-access-6p6z2\") pod \"kube-proxy-s62jp\" (UID: \"8441c562-dc85-4c05-96f7-e6f78b9e7f64\") " pod="kube-system/kube-proxy-s62jp" Jul 2 07:01:21.188841 kubelet[2313]: I0702 07:01:21.188797 2313 topology_manager.go:215] "Topology Admit Handler" podUID="70868eaf-7964-4ad0-8d3e-d9857c892a1c" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-th72z" Jul 2 07:01:21.193627 systemd[1]: Created slice kubepods-besteffort-pod70868eaf_7964_4ad0_8d3e_d9857c892a1c.slice - libcontainer container kubepods-besteffort-pod70868eaf_7964_4ad0_8d3e_d9857c892a1c.slice. Jul 2 07:01:21.245871 kubelet[2313]: I0702 07:01:21.245839 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5h6w\" (UniqueName: \"kubernetes.io/projected/70868eaf-7964-4ad0-8d3e-d9857c892a1c-kube-api-access-m5h6w\") pod \"tigera-operator-76ff79f7fd-th72z\" (UID: \"70868eaf-7964-4ad0-8d3e-d9857c892a1c\") " pod="tigera-operator/tigera-operator-76ff79f7fd-th72z" Jul 2 07:01:21.245871 kubelet[2313]: I0702 07:01:21.245873 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/70868eaf-7964-4ad0-8d3e-d9857c892a1c-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-th72z\" (UID: \"70868eaf-7964-4ad0-8d3e-d9857c892a1c\") " pod="tigera-operator/tigera-operator-76ff79f7fd-th72z" Jul 2 07:01:21.289531 kubelet[2313]: E0702 07:01:21.289491 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:21.290023 containerd[1289]: time="2024-07-02T07:01:21.289989193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s62jp,Uid:8441c562-dc85-4c05-96f7-e6f78b9e7f64,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:21.311158 containerd[1289]: time="2024-07-02T07:01:21.310656809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:21.311158 containerd[1289]: time="2024-07-02T07:01:21.311112189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:21.311158 containerd[1289]: time="2024-07-02T07:01:21.311151543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:21.311347 containerd[1289]: time="2024-07-02T07:01:21.311169047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:21.329231 systemd[1]: Started cri-containerd-ffd412fb6323326626389356cd8eeae6a898ffaf754dde354a5953e68f6068b3.scope - libcontainer container ffd412fb6323326626389356cd8eeae6a898ffaf754dde354a5953e68f6068b3. Jul 2 07:01:21.335000 audit: BPF prog-id=99 op=LOAD Jul 2 07:01:21.335000 audit: BPF prog-id=100 op=LOAD Jul 2 07:01:21.338147 kernel: audit: type=1334 audit(1719903681.335:381): prog-id=99 op=LOAD Jul 2 07:01:21.338185 kernel: audit: type=1334 audit(1719903681.335:382): prog-id=100 op=LOAD Jul 2 07:01:21.338202 kernel: audit: type=1300 audit(1719903681.335:382): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2426 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.335000 audit[2435]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2426 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643431326662363332333332363632363338393335366364386565 Jul 2 07:01:21.344709 kernel: audit: type=1327 audit(1719903681.335:382): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643431326662363332333332363632363338393335366364386565 Jul 2 07:01:21.344766 kernel: audit: type=1334 audit(1719903681.335:383): prog-id=101 op=LOAD Jul 2 07:01:21.335000 audit: BPF prog-id=101 op=LOAD Jul 2 07:01:21.335000 audit[2435]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2426 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.348861 kernel: audit: type=1300 audit(1719903681.335:383): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2426 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.335000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643431326662363332333332363632363338393335366364386565 Jul 2 07:01:21.352303 kernel: audit: type=1327 audit(1719903681.335:383): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643431326662363332333332363632363338393335366364386565 Jul 2 07:01:21.353328 kernel: audit: type=1334 audit(1719903681.335:384): prog-id=101 op=UNLOAD Jul 2 07:01:21.335000 audit: BPF prog-id=101 op=UNLOAD Jul 2 07:01:21.353394 containerd[1289]: time="2024-07-02T07:01:21.353047713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s62jp,Uid:8441c562-dc85-4c05-96f7-e6f78b9e7f64,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd412fb6323326626389356cd8eeae6a898ffaf754dde354a5953e68f6068b3\"" Jul 2 07:01:21.354358 kernel: audit: type=1334 audit(1719903681.335:385): prog-id=100 op=UNLOAD Jul 2 07:01:21.335000 audit: BPF prog-id=100 op=UNLOAD Jul 2 07:01:21.354434 kubelet[2313]: E0702 07:01:21.353535 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:21.355502 kernel: audit: type=1334 audit(1719903681.335:386): prog-id=102 op=LOAD Jul 2 07:01:21.335000 audit: BPF prog-id=102 op=LOAD Jul 2 07:01:21.335000 audit[2435]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2426 pid=2435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6666643431326662363332333332363632363338393335366364386565 Jul 2 07:01:21.355957 containerd[1289]: time="2024-07-02T07:01:21.355911772Z" level=info msg="CreateContainer within sandbox \"ffd412fb6323326626389356cd8eeae6a898ffaf754dde354a5953e68f6068b3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:01:21.375224 containerd[1289]: time="2024-07-02T07:01:21.375170336Z" level=info msg="CreateContainer within sandbox \"ffd412fb6323326626389356cd8eeae6a898ffaf754dde354a5953e68f6068b3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a591d6f30ea7d1639892d47361d832172bfa3dc8474f50abc597e509fb514dde\"" Jul 2 07:01:21.375760 containerd[1289]: time="2024-07-02T07:01:21.375651956Z" level=info msg="StartContainer for \"a591d6f30ea7d1639892d47361d832172bfa3dc8474f50abc597e509fb514dde\"" Jul 2 07:01:21.398264 systemd[1]: Started cri-containerd-a591d6f30ea7d1639892d47361d832172bfa3dc8474f50abc597e509fb514dde.scope - libcontainer container a591d6f30ea7d1639892d47361d832172bfa3dc8474f50abc597e509fb514dde. 
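The audit trail above is runc wiring up BPF programs for the kube-proxy pod sandbox (the container id in the PROCTITLE matches ffd412fb63… but is cut short because auditd caps PROCTITLE at 128 bytes): each program is announced with op=LOAD and released with op=UNLOAD, so within this burst prog-ids 100 and 101 come and go while 99 and 102 are still outstanding at the end of it. A minimal Python sketch of that bookkeeping, assuming the raw journal text is available as a string (the helper name surviving_progs is illustrative):

import re
from collections import Counter

# Tally "BPF prog-id=N op=LOAD/UNLOAD" audit records and return the program ids
# that were loaded more often than unloaded, i.e. still attached at the end of
# the captured window. Requiring the "BPF " prefix skips the kernel's type=1334
# echoes of the same events, which print only "prog-id=N op=...".
BPF_RE = re.compile(r"BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def surviving_progs(journal_text: str) -> set[str]:
    balance = Counter()
    for prog_id, op in BPF_RE.findall(journal_text):
        balance[prog_id] += 1 if op == "LOAD" else -1
    return {prog_id for prog_id, n in balance.items() if n > 0}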
Jul 2 07:01:21.407000 audit: BPF prog-id=103 op=LOAD Jul 2 07:01:21.407000 audit[2466]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2426 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135393164366633306561376431363339383932643437333631643833 Jul 2 07:01:21.407000 audit: BPF prog-id=104 op=LOAD Jul 2 07:01:21.407000 audit[2466]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2426 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135393164366633306561376431363339383932643437333631643833 Jul 2 07:01:21.407000 audit: BPF prog-id=104 op=UNLOAD Jul 2 07:01:21.407000 audit: BPF prog-id=103 op=UNLOAD Jul 2 07:01:21.407000 audit: BPF prog-id=105 op=LOAD Jul 2 07:01:21.407000 audit[2466]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2426 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135393164366633306561376431363339383932643437333631643833 Jul 2 07:01:21.420833 containerd[1289]: time="2024-07-02T07:01:21.420708854Z" level=info msg="StartContainer for \"a591d6f30ea7d1639892d47361d832172bfa3dc8474f50abc597e509fb514dde\" returns successfully" Jul 2 07:01:21.470000 audit[2521]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.470000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea921a9a0 a2=0 a3=7ffea921a98c items=0 ppid=2475 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.470000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:01:21.470000 audit[2520]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.470000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff6fe7fa60 a2=0 a3=a74b6dba955b3ac5 items=0 ppid=2475 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jul 2 07:01:21.470000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:01:21.471000 audit[2522]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.471000 audit[2522]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc10fe4280 a2=0 a3=7ffc10fe426c items=0 ppid=2475 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.471000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:01:21.472000 audit[2523]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.472000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffbf26afe0 a2=0 a3=7fffbf26afcc items=0 ppid=2475 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:01:21.472000 audit[2524]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.472000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff19839760 a2=0 a3=7fff1983974c items=0 ppid=2475 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.472000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:01:21.473000 audit[2525]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.473000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd1bf274f0 a2=0 a3=7ffd1bf274dc items=0 ppid=2475 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:01:21.497119 containerd[1289]: time="2024-07-02T07:01:21.497019538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-th72z,Uid:70868eaf-7964-4ad0-8d3e-d9857c892a1c,Namespace:tigera-operator,Attempt:0,}" Jul 2 07:01:21.516779 containerd[1289]: time="2024-07-02T07:01:21.516679381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:21.516779 containerd[1289]: time="2024-07-02T07:01:21.516755575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:21.516779 containerd[1289]: time="2024-07-02T07:01:21.516778257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:21.517028 containerd[1289]: time="2024-07-02T07:01:21.516796122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:21.534246 systemd[1]: Started cri-containerd-72e04a1c1c1d7e9988215ebcfcb699cafa4d57fd2f43aca7c61ca5dd8aeab328.scope - libcontainer container 72e04a1c1c1d7e9988215ebcfcb699cafa4d57fd2f43aca7c61ca5dd8aeab328. Jul 2 07:01:21.543000 audit: BPF prog-id=106 op=LOAD Jul 2 07:01:21.544000 audit: BPF prog-id=107 op=LOAD Jul 2 07:01:21.544000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2533 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653034613163316331643765393938383231356562636663623639 Jul 2 07:01:21.544000 audit: BPF prog-id=108 op=LOAD Jul 2 07:01:21.544000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2533 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653034613163316331643765393938383231356562636663623639 Jul 2 07:01:21.544000 audit: BPF prog-id=108 op=UNLOAD Jul 2 07:01:21.544000 audit: BPF prog-id=107 op=UNLOAD Jul 2 07:01:21.544000 audit: BPF prog-id=109 op=LOAD Jul 2 07:01:21.544000 audit[2543]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2533 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732653034613163316331643765393938383231356562636663623639 Jul 2 07:01:21.569117 containerd[1289]: time="2024-07-02T07:01:21.569071671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-th72z,Uid:70868eaf-7964-4ad0-8d3e-d9857c892a1c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72e04a1c1c1d7e9988215ebcfcb699cafa4d57fd2f43aca7c61ca5dd8aeab328\"" Jul 2 07:01:21.570980 containerd[1289]: time="2024-07-02T07:01:21.570931863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 07:01:21.574000 audit[2566]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
07:01:21.574000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffefe692310 a2=0 a3=7ffefe6922fc items=0 ppid=2475 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:01:21.577000 audit[2568]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.577000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc6f407320 a2=0 a3=7ffc6f40730c items=0 ppid=2475 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 07:01:21.580000 audit[2571]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.580000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd935924d0 a2=0 a3=7ffd935924bc items=0 ppid=2475 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.580000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 07:01:21.581000 audit[2572]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.581000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0fd541f0 a2=0 a3=7fff0fd541dc items=0 ppid=2475 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:01:21.584000 audit[2574]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.584000 audit[2574]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe28290bb0 a2=0 a3=7ffe28290b9c items=0 ppid=2475 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.584000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:01:21.585000 audit[2575]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.585000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc033f9e30 a2=0 a3=7ffc033f9e1c items=0 ppid=2475 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:01:21.587000 audit[2577]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.587000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8a315b90 a2=0 a3=7ffc8a315b7c items=0 ppid=2475 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:01:21.591000 audit[2580]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.591000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdcf2be890 a2=0 a3=7ffdcf2be87c items=0 ppid=2475 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 07:01:21.592000 audit[2581]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.592000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb97d1ae0 a2=0 a3=7ffeb97d1acc items=0 ppid=2475 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:01:21.594000 audit[2583]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.594000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc70c362f0 a2=0 a3=7ffc70c362dc items=0 ppid=2475 pid=2583 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.594000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:01:21.595000 audit[2584]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.595000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcdc566d60 a2=0 a3=7ffcdc566d4c items=0 ppid=2475 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:01:21.597000 audit[2586]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.597000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc68926140 a2=0 a3=7ffc6892612c items=0 ppid=2475 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.597000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:01:21.600000 audit[2589]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.600000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcce1eac00 a2=0 a3=7ffcce1eabec items=0 ppid=2475 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:01:21.603000 audit[2592]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.603000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff7e04b450 a2=0 a3=7fff7e04b43c items=0 ppid=2475 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.603000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:01:21.604000 audit[2593]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.604000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd07d67aa0 a2=0 a3=7ffd07d67a8c items=0 ppid=2475 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.604000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:01:21.606000 audit[2595]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.606000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc6f6fc770 a2=0 a3=7ffc6f6fc75c items=0 ppid=2475 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:01:21.609000 audit[2598]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.609000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd561ef6d0 a2=0 a3=7ffd561ef6bc items=0 ppid=2475 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:01:21.610000 audit[2599]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.610000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff21d4caf0 a2=0 a3=7fff21d4cadc items=0 ppid=2475 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:01:21.613000 audit[2601]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:01:21.613000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc15cd7780 a2=0 a3=7ffc15cd776c items=0 ppid=2475 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 07:01:21.627000 audit[2607]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:21.627000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffddf4927e0 a2=0 a3=7ffddf4927cc items=0 ppid=2475 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:21.636000 audit[2607]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:21.636000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffddf4927e0 a2=0 a3=7ffddf4927cc items=0 ppid=2475 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.636000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:21.638000 audit[2614]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.638000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5e04cfb0 a2=0 a3=7ffd5e04cf9c items=0 ppid=2475 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.638000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:01:21.640000 audit[2616]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.640000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffece8177b0 a2=0 a3=7ffece81779c items=0 ppid=2475 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 07:01:21.643000 audit[2619]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2619 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.643000 audit[2619]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc7745b5b0 a2=0 a3=7ffc7745b59c items=0 ppid=2475 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.643000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 07:01:21.644000 audit[2620]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2620 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.644000 audit[2620]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd3a1b9d20 a2=0 a3=7ffd3a1b9d0c items=0 ppid=2475 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.644000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:01:21.646000 audit[2622]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2622 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.646000 audit[2622]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9af0acf0 a2=0 a3=7ffd9af0acdc items=0 ppid=2475 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:01:21.647000 audit[2623]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2623 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.647000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc08db0820 a2=0 a3=7ffc08db080c items=0 ppid=2475 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.647000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:01:21.649000 audit[2625]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.649000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd05294c10 a2=0 a3=7ffd05294bfc items=0 ppid=2475 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 
07:01:21.653000 audit[2628]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2628 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.653000 audit[2628]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdc51c43f0 a2=0 a3=7ffdc51c43dc items=0 ppid=2475 pid=2628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.653000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:01:21.654000 audit[2629]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2629 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.654000 audit[2629]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1a5443b0 a2=0 a3=7ffd1a54439c items=0 ppid=2475 pid=2629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.654000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:01:21.656000 audit[2631]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.656000 audit[2631]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef89cf210 a2=0 a3=7ffef89cf1fc items=0 ppid=2475 pid=2631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.656000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:01:21.657000 audit[2632]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.657000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce3b83190 a2=0 a3=7ffce3b8317c items=0 ppid=2475 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.657000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:01:21.659000 audit[2634]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2634 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.659000 audit[2634]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffda44f5990 a2=0 a3=7ffda44f597c items=0 ppid=2475 pid=2634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.659000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:01:21.662000 audit[2637]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.662000 audit[2637]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffeb6e80010 a2=0 a3=7ffeb6e7fffc items=0 ppid=2475 pid=2637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:01:21.665000 audit[2640]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.665000 audit[2640]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe143085a0 a2=0 a3=7ffe1430858c items=0 ppid=2475 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 07:01:21.666000 audit[2641]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.666000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd53ee3c90 a2=0 a3=7ffd53ee3c7c items=0 ppid=2475 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.666000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:01:21.667000 audit[2643]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2643 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.667000 audit[2643]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc0cc30270 a2=0 a3=7ffc0cc3025c items=0 ppid=2475 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:01:21.670000 audit[2646]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.670000 
audit[2646]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffba97b540 a2=0 a3=7fffba97b52c items=0 ppid=2475 pid=2646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:01:21.671000 audit[2647]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.671000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffc95a5330 a2=0 a3=7fffc95a531c items=0 ppid=2475 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.671000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:01:21.673000 audit[2649]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.673000 audit[2649]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdfaaa2a50 a2=0 a3=7ffdfaaa2a3c items=0 ppid=2475 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.673000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 07:01:21.674000 audit[2650]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.674000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffff636df0 a2=0 a3=7fffff636ddc items=0 ppid=2475 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 07:01:21.676000 audit[2652]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2652 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.676000 audit[2652]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe19199af0 a2=0 a3=7ffe19199adc items=0 ppid=2475 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:01:21.679000 audit[2655]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2655 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:01:21.679000 audit[2655]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd977dfa60 a2=0 a3=7ffd977dfa4c items=0 ppid=2475 pid=2655 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.679000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:01:21.681000 audit[2657]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:01:21.681000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fffa5f7ef60 a2=0 a3=7fffa5f7ef4c items=0 ppid=2475 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.681000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:21.682000 audit[2657]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:01:21.682000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffa5f7ef60 a2=0 a3=7fffa5f7ef4c items=0 ppid=2475 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:21.682000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:21.893062 kubelet[2313]: E0702 07:01:21.892951 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:22.854604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457925794.mount: Deactivated successfully. 
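The NETFILTER_CFG burst above is kube-proxy installing its base chains and rules: records with family=2 come from iptables (IPv4) and family=10 from ip6tables (IPv6), and each accompanying SYSCALL/PROCTITLE pair carries the executed command hex-encoded with NUL bytes between the arguments. A short Python sketch for turning those PROCTITLE fields back into readable commands (decode_proctitle is an illustrative helper; the sample value is the first iptables record of the burst):

def decode_proctitle(hex_argv: str) -> str:
    # PROCTITLE is the process argv, hex-encoded, with NUL separators between arguments.
    return bytes.fromhex(hex_argv).replace(b"\x00", b" ").decode()

sample = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)
print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle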
Jul 2 07:01:23.214029 containerd[1289]: time="2024-07-02T07:01:23.213908608Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:23.214940 containerd[1289]: time="2024-07-02T07:01:23.214882937Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jul 2 07:01:23.216111 containerd[1289]: time="2024-07-02T07:01:23.216045209Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:23.217724 containerd[1289]: time="2024-07-02T07:01:23.217699922Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:23.219668 containerd[1289]: time="2024-07-02T07:01:23.219619803Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:23.220437 containerd[1289]: time="2024-07-02T07:01:23.220406087Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 1.649424028s" Jul 2 07:01:23.220500 containerd[1289]: time="2024-07-02T07:01:23.220443608Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 07:01:23.223269 containerd[1289]: time="2024-07-02T07:01:23.223231637Z" level=info msg="CreateContainer within sandbox \"72e04a1c1c1d7e9988215ebcfcb699cafa4d57fd2f43aca7c61ca5dd8aeab328\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 07:01:23.234764 containerd[1289]: time="2024-07-02T07:01:23.234721636Z" level=info msg="CreateContainer within sandbox \"72e04a1c1c1d7e9988215ebcfcb699cafa4d57fd2f43aca7c61ca5dd8aeab328\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"525ecbfb5ae32a456c21e3d437df322052abf75b1fc250bc7042fb2abfc5ec26\"" Jul 2 07:01:23.235220 containerd[1289]: time="2024-07-02T07:01:23.235196301Z" level=info msg="StartContainer for \"525ecbfb5ae32a456c21e3d437df322052abf75b1fc250bc7042fb2abfc5ec26\"" Jul 2 07:01:23.262277 systemd[1]: Started cri-containerd-525ecbfb5ae32a456c21e3d437df322052abf75b1fc250bc7042fb2abfc5ec26.scope - libcontainer container 525ecbfb5ae32a456c21e3d437df322052abf75b1fc250bc7042fb2abfc5ec26. 
Jul 2 07:01:23.269000 audit: BPF prog-id=110 op=LOAD Jul 2 07:01:23.270000 audit: BPF prog-id=111 op=LOAD Jul 2 07:01:23.270000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2533 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:23.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356563626662356165333261343536633231653364343337646633 Jul 2 07:01:23.270000 audit: BPF prog-id=112 op=LOAD Jul 2 07:01:23.270000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2533 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:23.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356563626662356165333261343536633231653364343337646633 Jul 2 07:01:23.270000 audit: BPF prog-id=112 op=UNLOAD Jul 2 07:01:23.270000 audit: BPF prog-id=111 op=UNLOAD Jul 2 07:01:23.270000 audit: BPF prog-id=113 op=LOAD Jul 2 07:01:23.270000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2533 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:23.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532356563626662356165333261343536633231653364343337646633 Jul 2 07:01:23.283215 containerd[1289]: time="2024-07-02T07:01:23.283119848Z" level=info msg="StartContainer for \"525ecbfb5ae32a456c21e3d437df322052abf75b1fc250bc7042fb2abfc5ec26\" returns successfully" Jul 2 07:01:23.904630 kubelet[2313]: I0702 07:01:23.904427 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s62jp" podStartSLOduration=3.9044045990000003 podStartE2EDuration="3.904404599s" podCreationTimestamp="2024-07-02 07:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:21.901255136 +0000 UTC m=+15.116243075" watchObservedRunningTime="2024-07-02 07:01:23.904404599 +0000 UTC m=+17.119392537" Jul 2 07:01:26.080000 audit[2708]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:26.080000 audit[2708]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc49c9e420 a2=0 a3=7ffc49c9e40c items=0 ppid=2475 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.080000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:26.081000 audit[2708]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:26.081000 audit[2708]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc49c9e420 a2=0 a3=0 items=0 ppid=2475 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:26.091000 audit[2710]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:26.091000 audit[2710]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffdfc2acea0 a2=0 a3=7ffdfc2ace8c items=0 ppid=2475 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.091000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:26.097000 audit[2710]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:26.097000 audit[2710]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfc2acea0 a2=0 a3=0 items=0 ppid=2475 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:26.204058 kubelet[2313]: I0702 07:01:26.203988 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-th72z" podStartSLOduration=3.553091547 podStartE2EDuration="5.203964651s" podCreationTimestamp="2024-07-02 07:01:21 +0000 UTC" firstStartedPulling="2024-07-02 07:01:21.570467386 +0000 UTC m=+14.785455314" lastFinishedPulling="2024-07-02 07:01:23.22134048 +0000 UTC m=+16.436328418" observedRunningTime="2024-07-02 07:01:23.905032223 +0000 UTC m=+17.120020161" watchObservedRunningTime="2024-07-02 07:01:26.203964651 +0000 UTC m=+19.418952599" Jul 2 07:01:26.204525 kubelet[2313]: I0702 07:01:26.204146 2313 topology_manager.go:215] "Topology Admit Handler" podUID="0804dae5-9130-48e0-9f33-aea5e67e250b" podNamespace="calico-system" podName="calico-typha-7564bc998b-vk8zv" Jul 2 07:01:26.213649 systemd[1]: Created slice kubepods-besteffort-pod0804dae5_9130_48e0_9f33_aea5e67e250b.slice - libcontainer container kubepods-besteffort-pod0804dae5_9130_48e0_9f33_aea5e67e250b.slice. 
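The pod_startup_latency_tracker record above for tigera-operator-76ff79f7fd-th72z reports both podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) and the smaller podStartSLOduration, which is consistent with the E2E figure minus the image pull window. That can be checked directly from the printed values; a quick Python check using the monotonic (m=+…) readings, which is what makes the subtraction come out exactly:

# Figures copied from the kubelet record above.
e2e = 5.203964651                    # podStartE2EDuration: 07:01:26.203964651 - 07:01:21
pull = 16.436328418 - 14.785455314   # lastFinishedPulling - firstStartedPulling (m=+... readings)
print(f"{e2e - pull:.9f}")           # 3.553091547 -> the reported podStartSLOduration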
Jul 2 07:01:26.245562 kubelet[2313]: I0702 07:01:26.245510 2313 topology_manager.go:215] "Topology Admit Handler" podUID="34442d6a-4c11-4988-8c5d-4653891d8aed" podNamespace="calico-system" podName="calico-node-rnxng" Jul 2 07:01:26.251255 systemd[1]: Created slice kubepods-besteffort-pod34442d6a_4c11_4988_8c5d_4653891d8aed.slice - libcontainer container kubepods-besteffort-pod34442d6a_4c11_4988_8c5d_4653891d8aed.slice. Jul 2 07:01:26.279837 kubelet[2313]: I0702 07:01:26.279777 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34442d6a-4c11-4988-8c5d-4653891d8aed-tigera-ca-bundle\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.279837 kubelet[2313]: I0702 07:01:26.279814 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-var-lib-calico\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.279837 kubelet[2313]: I0702 07:01:26.279836 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-lib-modules\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.279837 kubelet[2313]: I0702 07:01:26.279852 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-var-run-calico\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280181 kubelet[2313]: I0702 07:01:26.279882 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-cni-net-dir\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280181 kubelet[2313]: I0702 07:01:26.279901 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-policysync\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280181 kubelet[2313]: I0702 07:01:26.279921 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-flexvol-driver-host\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280181 kubelet[2313]: I0702 07:01:26.279940 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkkrl\" (UniqueName: \"kubernetes.io/projected/34442d6a-4c11-4988-8c5d-4653891d8aed-kube-api-access-nkkrl\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280181 
kubelet[2313]: I0702 07:01:26.279956 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0804dae5-9130-48e0-9f33-aea5e67e250b-typha-certs\") pod \"calico-typha-7564bc998b-vk8zv\" (UID: \"0804dae5-9130-48e0-9f33-aea5e67e250b\") " pod="calico-system/calico-typha-7564bc998b-vk8zv" Jul 2 07:01:26.280374 kubelet[2313]: I0702 07:01:26.279973 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-xtables-lock\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280374 kubelet[2313]: I0702 07:01:26.280030 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-cni-bin-dir\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280374 kubelet[2313]: I0702 07:01:26.280082 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0804dae5-9130-48e0-9f33-aea5e67e250b-tigera-ca-bundle\") pod \"calico-typha-7564bc998b-vk8zv\" (UID: \"0804dae5-9130-48e0-9f33-aea5e67e250b\") " pod="calico-system/calico-typha-7564bc998b-vk8zv" Jul 2 07:01:26.280374 kubelet[2313]: I0702 07:01:26.280108 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34442d6a-4c11-4988-8c5d-4653891d8aed-node-certs\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280374 kubelet[2313]: I0702 07:01:26.280145 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34442d6a-4c11-4988-8c5d-4653891d8aed-cni-log-dir\") pod \"calico-node-rnxng\" (UID: \"34442d6a-4c11-4988-8c5d-4653891d8aed\") " pod="calico-system/calico-node-rnxng" Jul 2 07:01:26.280541 kubelet[2313]: I0702 07:01:26.280168 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24zjr\" (UniqueName: \"kubernetes.io/projected/0804dae5-9130-48e0-9f33-aea5e67e250b-kube-api-access-24zjr\") pod \"calico-typha-7564bc998b-vk8zv\" (UID: \"0804dae5-9130-48e0-9f33-aea5e67e250b\") " pod="calico-system/calico-typha-7564bc998b-vk8zv" Jul 2 07:01:26.356196 kubelet[2313]: I0702 07:01:26.356052 2313 topology_manager.go:215] "Topology Admit Handler" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" podNamespace="calico-system" podName="csi-node-driver-rnptn" Jul 2 07:01:26.356404 kubelet[2313]: E0702 07:01:26.356382 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:26.380676 kubelet[2313]: I0702 07:01:26.380628 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/bb25da8c-03d5-4d1f-8f90-51fb2f280ed3-socket-dir\") pod \"csi-node-driver-rnptn\" (UID: \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\") " pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:26.380813 kubelet[2313]: I0702 07:01:26.380703 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5zbs\" (UniqueName: \"kubernetes.io/projected/bb25da8c-03d5-4d1f-8f90-51fb2f280ed3-kube-api-access-b5zbs\") pod \"csi-node-driver-rnptn\" (UID: \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\") " pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:26.380813 kubelet[2313]: I0702 07:01:26.380794 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb25da8c-03d5-4d1f-8f90-51fb2f280ed3-registration-dir\") pod \"csi-node-driver-rnptn\" (UID: \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\") " pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:26.380952 kubelet[2313]: I0702 07:01:26.380904 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb25da8c-03d5-4d1f-8f90-51fb2f280ed3-varrun\") pod \"csi-node-driver-rnptn\" (UID: \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\") " pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:26.381020 kubelet[2313]: I0702 07:01:26.380955 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb25da8c-03d5-4d1f-8f90-51fb2f280ed3-kubelet-dir\") pod \"csi-node-driver-rnptn\" (UID: \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\") " pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:26.381690 kubelet[2313]: E0702 07:01:26.381666 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.381792 kubelet[2313]: W0702 07:01:26.381689 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.381792 kubelet[2313]: E0702 07:01:26.381719 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.382083 kubelet[2313]: E0702 07:01:26.382066 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.382083 kubelet[2313]: W0702 07:01:26.382082 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.382180 kubelet[2313]: E0702 07:01:26.382097 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.382497 kubelet[2313]: E0702 07:01:26.382479 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.382497 kubelet[2313]: W0702 07:01:26.382496 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.382573 kubelet[2313]: E0702 07:01:26.382510 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.382738 kubelet[2313]: E0702 07:01:26.382723 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.382776 kubelet[2313]: W0702 07:01:26.382737 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.382776 kubelet[2313]: E0702 07:01:26.382751 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.384302 kubelet[2313]: E0702 07:01:26.384276 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.384302 kubelet[2313]: W0702 07:01:26.384300 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.384399 kubelet[2313]: E0702 07:01:26.384317 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.384571 kubelet[2313]: E0702 07:01:26.384561 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.384637 kubelet[2313]: W0702 07:01:26.384627 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.384741 kubelet[2313]: E0702 07:01:26.384716 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.385279 kubelet[2313]: E0702 07:01:26.385267 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.385357 kubelet[2313]: W0702 07:01:26.385345 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.385476 kubelet[2313]: E0702 07:01:26.385467 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.385608 kubelet[2313]: E0702 07:01:26.385600 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.385660 kubelet[2313]: W0702 07:01:26.385652 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.385774 kubelet[2313]: E0702 07:01:26.385763 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.385944 kubelet[2313]: E0702 07:01:26.385935 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.386002 kubelet[2313]: W0702 07:01:26.385993 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.386110 kubelet[2313]: E0702 07:01:26.386102 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.386244 kubelet[2313]: E0702 07:01:26.386237 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.386296 kubelet[2313]: W0702 07:01:26.386288 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.386401 kubelet[2313]: E0702 07:01:26.386388 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.386531 kubelet[2313]: E0702 07:01:26.386521 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.386593 kubelet[2313]: W0702 07:01:26.386584 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.386684 kubelet[2313]: E0702 07:01:26.386676 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.386817 kubelet[2313]: E0702 07:01:26.386810 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.386876 kubelet[2313]: W0702 07:01:26.386858 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.386968 kubelet[2313]: E0702 07:01:26.386960 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.387094 kubelet[2313]: E0702 07:01:26.387087 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.387181 kubelet[2313]: W0702 07:01:26.387173 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.387270 kubelet[2313]: E0702 07:01:26.387261 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.387407 kubelet[2313]: E0702 07:01:26.387398 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.387480 kubelet[2313]: W0702 07:01:26.387470 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.387604 kubelet[2313]: E0702 07:01:26.387593 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.387757 kubelet[2313]: E0702 07:01:26.387748 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.388238 kubelet[2313]: W0702 07:01:26.388227 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.388375 kubelet[2313]: E0702 07:01:26.388365 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.389301 kubelet[2313]: E0702 07:01:26.389291 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.389370 kubelet[2313]: W0702 07:01:26.389360 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.389519 kubelet[2313]: E0702 07:01:26.389510 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.400502 kubelet[2313]: E0702 07:01:26.400481 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.400630 kubelet[2313]: W0702 07:01:26.400617 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.400751 kubelet[2313]: E0702 07:01:26.400741 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.401334 kubelet[2313]: E0702 07:01:26.401325 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.401409 kubelet[2313]: W0702 07:01:26.401400 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.401558 kubelet[2313]: E0702 07:01:26.401549 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.401809 kubelet[2313]: E0702 07:01:26.401785 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.401888 kubelet[2313]: W0702 07:01:26.401878 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.401996 kubelet[2313]: E0702 07:01:26.401988 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.402168 kubelet[2313]: E0702 07:01:26.402160 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.402239 kubelet[2313]: W0702 07:01:26.402228 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.402349 kubelet[2313]: E0702 07:01:26.402340 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.402571 kubelet[2313]: E0702 07:01:26.402563 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.402629 kubelet[2313]: W0702 07:01:26.402621 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.402736 kubelet[2313]: E0702 07:01:26.402728 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.402854 kubelet[2313]: E0702 07:01:26.402846 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.402913 kubelet[2313]: W0702 07:01:26.402906 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.403023 kubelet[2313]: E0702 07:01:26.403015 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.403382 kubelet[2313]: E0702 07:01:26.403373 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.403433 kubelet[2313]: W0702 07:01:26.403425 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.403544 kubelet[2313]: E0702 07:01:26.403510 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.403659 kubelet[2313]: E0702 07:01:26.403650 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.403721 kubelet[2313]: W0702 07:01:26.403696 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.403884 kubelet[2313]: E0702 07:01:26.403783 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.405079 kubelet[2313]: E0702 07:01:26.405055 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.405079 kubelet[2313]: W0702 07:01:26.405072 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.405240 kubelet[2313]: E0702 07:01:26.405221 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.405462 kubelet[2313]: E0702 07:01:26.405437 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.405462 kubelet[2313]: W0702 07:01:26.405448 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.405583 kubelet[2313]: E0702 07:01:26.405501 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.406319 kubelet[2313]: E0702 07:01:26.406248 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.406319 kubelet[2313]: W0702 07:01:26.406265 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.406319 kubelet[2313]: E0702 07:01:26.406277 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.413106 kubelet[2313]: E0702 07:01:26.413069 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.413412 kubelet[2313]: W0702 07:01:26.413383 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.413573 kubelet[2313]: E0702 07:01:26.413533 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.482425 kubelet[2313]: E0702 07:01:26.482383 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.482425 kubelet[2313]: W0702 07:01:26.482417 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.482608 kubelet[2313]: E0702 07:01:26.482440 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.482691 kubelet[2313]: E0702 07:01:26.482668 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.482691 kubelet[2313]: W0702 07:01:26.482688 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.482753 kubelet[2313]: E0702 07:01:26.482700 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.482960 kubelet[2313]: E0702 07:01:26.482934 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.482960 kubelet[2313]: W0702 07:01:26.482955 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.483026 kubelet[2313]: E0702 07:01:26.482969 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.483231 kubelet[2313]: E0702 07:01:26.483209 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.483231 kubelet[2313]: W0702 07:01:26.483225 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.483307 kubelet[2313]: E0702 07:01:26.483233 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.483432 kubelet[2313]: E0702 07:01:26.483416 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.483432 kubelet[2313]: W0702 07:01:26.483428 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.483496 kubelet[2313]: E0702 07:01:26.483437 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.483622 kubelet[2313]: E0702 07:01:26.483601 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.483622 kubelet[2313]: W0702 07:01:26.483618 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.483688 kubelet[2313]: E0702 07:01:26.483627 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.483805 kubelet[2313]: E0702 07:01:26.483779 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.483805 kubelet[2313]: W0702 07:01:26.483794 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.483911 kubelet[2313]: E0702 07:01:26.483856 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.484025 kubelet[2313]: E0702 07:01:26.483992 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.484025 kubelet[2313]: W0702 07:01:26.484012 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.484100 kubelet[2313]: E0702 07:01:26.484070 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.484214 kubelet[2313]: E0702 07:01:26.484192 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.484214 kubelet[2313]: W0702 07:01:26.484210 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.484279 kubelet[2313]: E0702 07:01:26.484245 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.484511 kubelet[2313]: E0702 07:01:26.484487 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.484511 kubelet[2313]: W0702 07:01:26.484501 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.484619 kubelet[2313]: E0702 07:01:26.484578 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.484732 kubelet[2313]: E0702 07:01:26.484709 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.484732 kubelet[2313]: W0702 07:01:26.484728 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.484781 kubelet[2313]: E0702 07:01:26.484756 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.484989 kubelet[2313]: E0702 07:01:26.484956 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.484989 kubelet[2313]: W0702 07:01:26.484984 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.485052 kubelet[2313]: E0702 07:01:26.485001 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.485304 kubelet[2313]: E0702 07:01:26.485278 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.485335 kubelet[2313]: W0702 07:01:26.485302 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.485335 kubelet[2313]: E0702 07:01:26.485326 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.485540 kubelet[2313]: E0702 07:01:26.485514 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.485540 kubelet[2313]: W0702 07:01:26.485527 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.485767 kubelet[2313]: E0702 07:01:26.485655 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.485767 kubelet[2313]: W0702 07:01:26.485662 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.485847 kubelet[2313]: E0702 07:01:26.485819 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.485847 kubelet[2313]: W0702 07:01:26.485837 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.485933 kubelet[2313]: E0702 07:01:26.485848 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.485933 kubelet[2313]: E0702 07:01:26.485897 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.485933 kubelet[2313]: E0702 07:01:26.485907 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.486162 kubelet[2313]: E0702 07:01:26.486142 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.486162 kubelet[2313]: W0702 07:01:26.486155 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.486287 kubelet[2313]: E0702 07:01:26.486266 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.486433 kubelet[2313]: E0702 07:01:26.486409 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.486433 kubelet[2313]: W0702 07:01:26.486427 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.486516 kubelet[2313]: E0702 07:01:26.486447 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.486731 kubelet[2313]: E0702 07:01:26.486715 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.486731 kubelet[2313]: W0702 07:01:26.486730 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.486789 kubelet[2313]: E0702 07:01:26.486750 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.487065 kubelet[2313]: E0702 07:01:26.487041 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.487065 kubelet[2313]: W0702 07:01:26.487062 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.487159 kubelet[2313]: E0702 07:01:26.487080 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.487333 kubelet[2313]: E0702 07:01:26.487312 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.487333 kubelet[2313]: W0702 07:01:26.487328 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.487421 kubelet[2313]: E0702 07:01:26.487390 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.487571 kubelet[2313]: E0702 07:01:26.487517 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.487571 kubelet[2313]: W0702 07:01:26.487532 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.487632 kubelet[2313]: E0702 07:01:26.487584 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.487698 kubelet[2313]: E0702 07:01:26.487680 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.487698 kubelet[2313]: W0702 07:01:26.487693 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.487782 kubelet[2313]: E0702 07:01:26.487704 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:26.487982 kubelet[2313]: E0702 07:01:26.487945 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.487982 kubelet[2313]: W0702 07:01:26.487972 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.488213 kubelet[2313]: E0702 07:01:26.487997 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.488369 kubelet[2313]: E0702 07:01:26.488321 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.488369 kubelet[2313]: W0702 07:01:26.488361 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.488455 kubelet[2313]: E0702 07:01:26.488382 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.493548 kubelet[2313]: E0702 07:01:26.493535 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:26.493631 kubelet[2313]: W0702 07:01:26.493622 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:26.493684 kubelet[2313]: E0702 07:01:26.493675 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:26.518815 kubelet[2313]: E0702 07:01:26.518778 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:26.519361 containerd[1289]: time="2024-07-02T07:01:26.519316984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7564bc998b-vk8zv,Uid:0804dae5-9130-48e0-9f33-aea5e67e250b,Namespace:calico-system,Attempt:0,}" Jul 2 07:01:26.543162 containerd[1289]: time="2024-07-02T07:01:26.542905010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:26.543162 containerd[1289]: time="2024-07-02T07:01:26.542966626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:26.543162 containerd[1289]: time="2024-07-02T07:01:26.542993036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:26.543162 containerd[1289]: time="2024-07-02T07:01:26.543024966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:26.555658 kubelet[2313]: E0702 07:01:26.554669 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:26.555840 containerd[1289]: time="2024-07-02T07:01:26.555755860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnxng,Uid:34442d6a-4c11-4988-8c5d-4653891d8aed,Namespace:calico-system,Attempt:0,}" Jul 2 07:01:26.565330 systemd[1]: Started cri-containerd-1d120741fba766fd450373fd24d47ead2ca93ba2f5e6d9beb710fc6213246c54.scope - libcontainer container 1d120741fba766fd450373fd24d47ead2ca93ba2f5e6d9beb710fc6213246c54. Jul 2 07:01:26.615000 audit: BPF prog-id=114 op=LOAD Jul 2 07:01:26.621038 kernel: kauditd_printk_skb: 202 callbacks suppressed Jul 2 07:01:26.621200 kernel: audit: type=1334 audit(1719903686.615:459): prog-id=114 op=LOAD Jul 2 07:01:26.621232 kernel: audit: type=1334 audit(1719903686.616:460): prog-id=115 op=LOAD Jul 2 07:01:26.616000 audit: BPF prog-id=115 op=LOAD Jul 2 07:01:26.616000 audit[2788]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2778 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313230373431666261373636666434353033373366643234643437 Jul 2 07:01:26.631573 kernel: audit: type=1300 audit(1719903686.616:460): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2778 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.631760 kernel: audit: type=1327 audit(1719903686.616:460): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313230373431666261373636666434353033373366643234643437 Jul 2 07:01:26.616000 audit: BPF prog-id=116 op=LOAD Jul 2 07:01:26.632989 kernel: audit: type=1334 audit(1719903686.616:461): prog-id=116 op=LOAD Jul 2 07:01:26.638411 kernel: audit: type=1300 audit(1719903686.616:461): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2778 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.638495 kernel: audit: type=1327 audit(1719903686.616:461): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313230373431666261373636666434353033373366643234643437 Jul 2 07:01:26.616000 audit[2788]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2778 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313230373431666261373636666434353033373366643234643437 Jul 2 07:01:26.616000 audit: BPF prog-id=116 op=UNLOAD Jul 2 07:01:26.657104 kernel: audit: type=1334 audit(1719903686.616:462): prog-id=116 op=UNLOAD Jul 2 07:01:26.657235 kernel: audit: type=1334 audit(1719903686.616:463): prog-id=115 op=UNLOAD Jul 2 07:01:26.657255 kernel: audit: type=1334 audit(1719903686.616:464): prog-id=117 op=LOAD Jul 2 07:01:26.616000 audit: BPF prog-id=115 op=UNLOAD Jul 2 07:01:26.616000 audit: BPF prog-id=117 op=LOAD Jul 2 07:01:26.616000 audit[2788]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2778 pid=2788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164313230373431666261373636666434353033373366643234643437 Jul 2 07:01:26.657514 containerd[1289]: time="2024-07-02T07:01:26.646897885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7564bc998b-vk8zv,Uid:0804dae5-9130-48e0-9f33-aea5e67e250b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d120741fba766fd450373fd24d47ead2ca93ba2f5e6d9beb710fc6213246c54\"" Jul 2 07:01:26.657514 containerd[1289]: time="2024-07-02T07:01:26.655054958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 07:01:26.657636 kubelet[2313]: E0702 07:01:26.647684 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:26.808955 containerd[1289]: time="2024-07-02T07:01:26.808710241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:26.809628 containerd[1289]: time="2024-07-02T07:01:26.809580030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:26.809693 containerd[1289]: time="2024-07-02T07:01:26.809622130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:26.809693 containerd[1289]: time="2024-07-02T07:01:26.809642298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:26.827333 systemd[1]: Started cri-containerd-4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5.scope - libcontainer container 4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5. 
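The repeated kubelet dns.go:153 "Nameserver limits exceeded" warnings above occur because the node's resolv.conf lists more nameservers than kubelet will pass through to pods, so only the leading entries are applied (here 1.1.1.1 1.0.0.1 8.8.8.8). A rough sketch of that truncation, assuming kubelet's historical cap of three entries and an invented fourth nameserver (9.9.9.9) purely for illustration:

    # resolvconf_check.py - approximate the nameserver truncation behind the
    # kubelet dns.go:153 warnings; the 3-entry cap and the sample file below
    # are assumptions, not taken from this log.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        # kubelet keeps only the first entries and warns that the rest were omitted
        return servers[:MAX_NAMESERVERS]

    sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(sample))  # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']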
Jul 2 07:01:26.837000 audit: BPF prog-id=118 op=LOAD Jul 2 07:01:26.837000 audit: BPF prog-id=119 op=LOAD Jul 2 07:01:26.837000 audit[2830]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2818 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461323035316536336530343331323533343336346566333063313531 Jul 2 07:01:26.837000 audit: BPF prog-id=120 op=LOAD Jul 2 07:01:26.837000 audit[2830]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2818 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461323035316536336530343331323533343336346566333063313531 Jul 2 07:01:26.837000 audit: BPF prog-id=120 op=UNLOAD Jul 2 07:01:26.837000 audit: BPF prog-id=119 op=UNLOAD Jul 2 07:01:26.837000 audit: BPF prog-id=121 op=LOAD Jul 2 07:01:26.837000 audit[2830]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2818 pid=2830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:26.837000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3461323035316536336530343331323533343336346566333063313531 Jul 2 07:01:26.850585 containerd[1289]: time="2024-07-02T07:01:26.850341584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rnxng,Uid:34442d6a-4c11-4988-8c5d-4653891d8aed,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\"" Jul 2 07:01:26.851523 kubelet[2313]: E0702 07:01:26.851469 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:27.111000 audit[2856]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:27.111000 audit[2856]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe994a80a0 a2=0 a3=7ffe994a808c items=0 ppid=2475 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:27.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:27.112000 audit[2856]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule 
pid=2856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:27.112000 audit[2856]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe994a80a0 a2=0 a3=0 items=0 ppid=2475 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:27.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:27.862615 kubelet[2313]: E0702 07:01:27.862551 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:29.862976 kubelet[2313]: E0702 07:01:29.862925 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:30.015138 containerd[1289]: time="2024-07-02T07:01:30.015056055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:30.015997 containerd[1289]: time="2024-07-02T07:01:30.015831966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 07:01:30.016992 containerd[1289]: time="2024-07-02T07:01:30.016959499Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:30.018619 containerd[1289]: time="2024-07-02T07:01:30.018536277Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:30.022412 containerd[1289]: time="2024-07-02T07:01:30.020184289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:30.022539 containerd[1289]: time="2024-07-02T07:01:30.020720188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.365623793s" Jul 2 07:01:30.022576 containerd[1289]: time="2024-07-02T07:01:30.022557497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 07:01:30.023568 containerd[1289]: time="2024-07-02T07:01:30.023521863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 07:01:30.030387 containerd[1289]: time="2024-07-02T07:01:30.030333957Z" level=info msg="CreateContainer within sandbox \"1d120741fba766fd450373fd24d47ead2ca93ba2f5e6d9beb710fc6213246c54\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 07:01:30.046350 containerd[1289]: time="2024-07-02T07:01:30.046304964Z" level=info msg="CreateContainer within sandbox \"1d120741fba766fd450373fd24d47ead2ca93ba2f5e6d9beb710fc6213246c54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"de3d6a4c59cc4dc3287f27133c06d67f819c9a2f4aea527c0ae10682e3ecb0af\"" Jul 2 07:01:30.046914 containerd[1289]: time="2024-07-02T07:01:30.046880748Z" level=info msg="StartContainer for \"de3d6a4c59cc4dc3287f27133c06d67f819c9a2f4aea527c0ae10682e3ecb0af\"" Jul 2 07:01:30.076264 systemd[1]: Started cri-containerd-de3d6a4c59cc4dc3287f27133c06d67f819c9a2f4aea527c0ae10682e3ecb0af.scope - libcontainer container de3d6a4c59cc4dc3287f27133c06d67f819c9a2f4aea527c0ae10682e3ecb0af. Jul 2 07:01:30.084000 audit: BPF prog-id=122 op=LOAD Jul 2 07:01:30.085000 audit: BPF prog-id=123 op=LOAD Jul 2 07:01:30.085000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2778 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:30.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465336436613463353963633464633332383766323731333363303664 Jul 2 07:01:30.085000 audit: BPF prog-id=124 op=LOAD Jul 2 07:01:30.085000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2778 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:30.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465336436613463353963633464633332383766323731333363303664 Jul 2 07:01:30.085000 audit: BPF prog-id=124 op=UNLOAD Jul 2 07:01:30.085000 audit: BPF prog-id=123 op=UNLOAD Jul 2 07:01:30.085000 audit: BPF prog-id=125 op=LOAD Jul 2 07:01:30.085000 audit[2871]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2778 pid=2871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:30.085000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465336436613463353963633464633332383766323731333363303664 Jul 2 07:01:30.117210 containerd[1289]: time="2024-07-02T07:01:30.114349873Z" level=info msg="StartContainer for \"de3d6a4c59cc4dc3287f27133c06d67f819c9a2f4aea527c0ae10682e3ecb0af\" returns successfully" Jul 2 07:01:30.912954 kubelet[2313]: E0702 07:01:30.912915 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:30.921499 kubelet[2313]: I0702 07:01:30.921438 2313 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="calico-system/calico-typha-7564bc998b-vk8zv" podStartSLOduration=1.552805485 podStartE2EDuration="4.921419367s" podCreationTimestamp="2024-07-02 07:01:26 +0000 UTC" firstStartedPulling="2024-07-02 07:01:26.6547056 +0000 UTC m=+19.869693538" lastFinishedPulling="2024-07-02 07:01:30.023319482 +0000 UTC m=+23.238307420" observedRunningTime="2024-07-02 07:01:30.921112409 +0000 UTC m=+24.136100347" watchObservedRunningTime="2024-07-02 07:01:30.921419367 +0000 UTC m=+24.136407305" Jul 2 07:01:31.007646 kubelet[2313]: E0702 07:01:31.007611 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.007646 kubelet[2313]: W0702 07:01:31.007638 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.007839 kubelet[2313]: E0702 07:01:31.007662 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.007910 kubelet[2313]: E0702 07:01:31.007896 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.007945 kubelet[2313]: W0702 07:01:31.007910 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.007945 kubelet[2313]: E0702 07:01:31.007923 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.008115 kubelet[2313]: E0702 07:01:31.008097 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.008115 kubelet[2313]: W0702 07:01:31.008107 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.008115 kubelet[2313]: E0702 07:01:31.008115 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.008386 kubelet[2313]: E0702 07:01:31.008365 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.008386 kubelet[2313]: W0702 07:01:31.008381 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.008436 kubelet[2313]: E0702 07:01:31.008390 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.008588 kubelet[2313]: E0702 07:01:31.008574 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.008588 kubelet[2313]: W0702 07:01:31.008587 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.008644 kubelet[2313]: E0702 07:01:31.008599 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.008798 kubelet[2313]: E0702 07:01:31.008784 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.008832 kubelet[2313]: W0702 07:01:31.008797 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.008832 kubelet[2313]: E0702 07:01:31.008809 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.008997 kubelet[2313]: E0702 07:01:31.008984 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.009021 kubelet[2313]: W0702 07:01:31.008997 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.009021 kubelet[2313]: E0702 07:01:31.009007 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.009228 kubelet[2313]: E0702 07:01:31.009215 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.009293 kubelet[2313]: W0702 07:01:31.009227 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.009293 kubelet[2313]: E0702 07:01:31.009237 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.009432 kubelet[2313]: E0702 07:01:31.009419 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.009457 kubelet[2313]: W0702 07:01:31.009433 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.009457 kubelet[2313]: E0702 07:01:31.009443 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.009640 kubelet[2313]: E0702 07:01:31.009626 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.009640 kubelet[2313]: W0702 07:01:31.009639 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.009697 kubelet[2313]: E0702 07:01:31.009660 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.009849 kubelet[2313]: E0702 07:01:31.009836 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.009849 kubelet[2313]: W0702 07:01:31.009846 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.009909 kubelet[2313]: E0702 07:01:31.009853 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.009997 kubelet[2313]: E0702 07:01:31.009985 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.010024 kubelet[2313]: W0702 07:01:31.009996 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.010024 kubelet[2313]: E0702 07:01:31.010005 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.010166 kubelet[2313]: E0702 07:01:31.010155 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.010197 kubelet[2313]: W0702 07:01:31.010165 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.010197 kubelet[2313]: E0702 07:01:31.010174 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.010328 kubelet[2313]: E0702 07:01:31.010318 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.010328 kubelet[2313]: W0702 07:01:31.010327 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.010373 kubelet[2313]: E0702 07:01:31.010334 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.010487 kubelet[2313]: E0702 07:01:31.010477 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.010487 kubelet[2313]: W0702 07:01:31.010486 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.010533 kubelet[2313]: E0702 07:01:31.010493 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.015874 kubelet[2313]: E0702 07:01:31.015850 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.015874 kubelet[2313]: W0702 07:01:31.015870 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.015945 kubelet[2313]: E0702 07:01:31.015891 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.016121 kubelet[2313]: E0702 07:01:31.016105 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.016121 kubelet[2313]: W0702 07:01:31.016115 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.016182 kubelet[2313]: E0702 07:01:31.016143 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.016342 kubelet[2313]: E0702 07:01:31.016316 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.016342 kubelet[2313]: W0702 07:01:31.016331 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.016342 kubelet[2313]: E0702 07:01:31.016345 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.016581 kubelet[2313]: E0702 07:01:31.016512 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.016581 kubelet[2313]: W0702 07:01:31.016519 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.016581 kubelet[2313]: E0702 07:01:31.016530 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.016723 kubelet[2313]: E0702 07:01:31.016710 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.016723 kubelet[2313]: W0702 07:01:31.016721 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.016821 kubelet[2313]: E0702 07:01:31.016737 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.016920 kubelet[2313]: E0702 07:01:31.016909 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.016920 kubelet[2313]: W0702 07:01:31.016918 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017002 kubelet[2313]: E0702 07:01:31.016932 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.017178 kubelet[2313]: E0702 07:01:31.017161 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.017178 kubelet[2313]: W0702 07:01:31.017174 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017178 kubelet[2313]: E0702 07:01:31.017187 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.017411 kubelet[2313]: E0702 07:01:31.017345 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.017411 kubelet[2313]: W0702 07:01:31.017353 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017411 kubelet[2313]: E0702 07:01:31.017378 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.017523 kubelet[2313]: E0702 07:01:31.017509 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.017523 kubelet[2313]: W0702 07:01:31.017519 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017588 kubelet[2313]: E0702 07:01:31.017543 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.017674 kubelet[2313]: E0702 07:01:31.017651 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.017674 kubelet[2313]: W0702 07:01:31.017669 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017746 kubelet[2313]: E0702 07:01:31.017680 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.017852 kubelet[2313]: E0702 07:01:31.017838 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.017852 kubelet[2313]: W0702 07:01:31.017849 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.017926 kubelet[2313]: E0702 07:01:31.017863 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.018036 kubelet[2313]: E0702 07:01:31.018022 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.018036 kubelet[2313]: W0702 07:01:31.018032 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.018112 kubelet[2313]: E0702 07:01:31.018044 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.018240 kubelet[2313]: E0702 07:01:31.018229 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.018240 kubelet[2313]: W0702 07:01:31.018239 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.018315 kubelet[2313]: E0702 07:01:31.018253 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.018424 kubelet[2313]: E0702 07:01:31.018411 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.018424 kubelet[2313]: W0702 07:01:31.018420 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.018507 kubelet[2313]: E0702 07:01:31.018433 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.018636 kubelet[2313]: E0702 07:01:31.018619 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.018673 kubelet[2313]: W0702 07:01:31.018636 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.018673 kubelet[2313]: E0702 07:01:31.018656 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019217 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.021925 kubelet[2313]: W0702 07:01:31.019232 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019243 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019534 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.021925 kubelet[2313]: W0702 07:01:31.019549 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019557 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019702 2313 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:01:31.021925 kubelet[2313]: W0702 07:01:31.019711 2313 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:01:31.021925 kubelet[2313]: E0702 07:01:31.019724 2313 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:01:31.640863 containerd[1289]: time="2024-07-02T07:01:31.640803900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:31.641955 containerd[1289]: time="2024-07-02T07:01:31.641861871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 07:01:31.643299 containerd[1289]: time="2024-07-02T07:01:31.643261405Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:31.645712 containerd[1289]: time="2024-07-02T07:01:31.645664408Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:31.649289 containerd[1289]: time="2024-07-02T07:01:31.649237563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:31.650214 containerd[1289]: time="2024-07-02T07:01:31.650182622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.626625343s" Jul 2 07:01:31.650274 containerd[1289]: time="2024-07-02T07:01:31.650220514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 07:01:31.653457 containerd[1289]: time="2024-07-02T07:01:31.652837478Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 07:01:31.673953 containerd[1289]: time="2024-07-02T07:01:31.673908697Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3\"" Jul 2 07:01:31.674595 containerd[1289]: time="2024-07-02T07:01:31.674549012Z" level=info msg="StartContainer for \"8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3\"" Jul 2 07:01:31.717422 systemd[1]: Started cri-containerd-8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3.scope - libcontainer container 8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3. 
Jul 2 07:01:31.728000 audit: BPF prog-id=126 op=LOAD Jul 2 07:01:31.730273 kernel: kauditd_printk_skb: 32 callbacks suppressed Jul 2 07:01:31.730341 kernel: audit: type=1334 audit(1719903691.728:479): prog-id=126 op=LOAD Jul 2 07:01:31.728000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.734994 kernel: audit: type=1300 audit(1719903691.728:479): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303666356233363465636532303733393234363534346161653031 Jul 2 07:01:31.728000 audit: BPF prog-id=127 op=LOAD Jul 2 07:01:31.739715 kernel: audit: type=1327 audit(1719903691.728:479): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303666356233363465636532303733393234363534346161653031 Jul 2 07:01:31.739765 kernel: audit: type=1334 audit(1719903691.728:480): prog-id=127 op=LOAD Jul 2 07:01:31.743222 kernel: audit: type=1300 audit(1719903691.728:480): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.728000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303666356233363465636532303733393234363534346161653031 Jul 2 07:01:31.746821 kernel: audit: type=1327 audit(1719903691.728:480): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303666356233363465636532303733393234363534346161653031 Jul 2 07:01:31.746971 kernel: audit: type=1334 audit(1719903691.728:481): prog-id=127 op=UNLOAD Jul 2 07:01:31.728000 audit: BPF prog-id=127 op=UNLOAD Jul 2 07:01:31.747705 kernel: audit: type=1334 audit(1719903691.728:482): prog-id=126 op=UNLOAD Jul 2 07:01:31.728000 audit: BPF prog-id=126 op=UNLOAD Jul 2 07:01:31.748492 kernel: audit: type=1334 audit(1719903691.728:483): prog-id=128 op=LOAD Jul 2 07:01:31.728000 audit: BPF prog-id=128 op=LOAD Jul 2 07:01:31.728000 audit[2949]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 
a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.752493 kernel: audit: type=1300 audit(1719903691.728:483): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2818 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:31.728000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3830303666356233363465636532303733393234363534346161653031 Jul 2 07:01:31.758252 systemd[1]: cri-containerd-8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3.scope: Deactivated successfully. Jul 2 07:01:31.763000 audit: BPF prog-id=128 op=UNLOAD Jul 2 07:01:31.910866 kubelet[2313]: E0702 07:01:31.910741 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:31.917359 containerd[1289]: time="2024-07-02T07:01:31.917303095Z" level=info msg="StartContainer for \"8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3\" returns successfully" Jul 2 07:01:31.919288 kubelet[2313]: I0702 07:01:31.919264 2313 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:01:31.922321 kubelet[2313]: E0702 07:01:31.922306 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:31.952071 containerd[1289]: time="2024-07-02T07:01:31.952007592Z" level=info msg="shim disconnected" id=8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3 namespace=k8s.io Jul 2 07:01:31.952071 containerd[1289]: time="2024-07-02T07:01:31.952069540Z" level=warning msg="cleaning up after shim disconnected" id=8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3 namespace=k8s.io Jul 2 07:01:31.952278 containerd[1289]: time="2024-07-02T07:01:31.952077915Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:01:32.027419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8006f5b364ece20739246544aae01e4ed065349232e2aaa77ab4949ac49a60f3-rootfs.mount: Deactivated successfully. 
Jul 2 07:01:32.922285 kubelet[2313]: E0702 07:01:32.922258 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:32.922874 containerd[1289]: time="2024-07-02T07:01:32.922831439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 07:01:33.863409 kubelet[2313]: E0702 07:01:33.863348 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:35.863772 kubelet[2313]: E0702 07:01:35.863674 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:36.201099 containerd[1289]: time="2024-07-02T07:01:36.200971587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:36.201868 containerd[1289]: time="2024-07-02T07:01:36.201800515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 07:01:36.203089 containerd[1289]: time="2024-07-02T07:01:36.203046458Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:36.204942 containerd[1289]: time="2024-07-02T07:01:36.204918818Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:36.206643 containerd[1289]: time="2024-07-02T07:01:36.206611390Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:36.207283 containerd[1289]: time="2024-07-02T07:01:36.207252457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 3.284376274s" Jul 2 07:01:36.207350 containerd[1289]: time="2024-07-02T07:01:36.207286240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 07:01:36.209109 containerd[1289]: time="2024-07-02T07:01:36.209086685Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 07:01:36.226794 containerd[1289]: time="2024-07-02T07:01:36.226752781Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a\"" Jul 2 07:01:36.230656 containerd[1289]: time="2024-07-02T07:01:36.230602350Z" level=info msg="StartContainer for \"bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a\"" Jul 2 07:01:36.266361 systemd[1]: Started cri-containerd-bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a.scope - libcontainer container bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a. Jul 2 07:01:36.287000 audit: BPF prog-id=129 op=LOAD Jul 2 07:01:36.287000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2818 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:36.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626635613631363339363463316536336162323566626563366463 Jul 2 07:01:36.287000 audit: BPF prog-id=130 op=LOAD Jul 2 07:01:36.287000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2818 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:36.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626635613631363339363463316536336162323566626563366463 Jul 2 07:01:36.287000 audit: BPF prog-id=130 op=UNLOAD Jul 2 07:01:36.287000 audit: BPF prog-id=129 op=UNLOAD Jul 2 07:01:36.287000 audit: BPF prog-id=131 op=LOAD Jul 2 07:01:36.287000 audit[3021]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2818 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:36.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6263626635613631363339363463316536336162323566626563366463 Jul 2 07:01:37.138026 kubelet[2313]: E0702 07:01:37.137980 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:37.310339 containerd[1289]: time="2024-07-02T07:01:37.310265766Z" level=info msg="StartContainer for \"bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a\" returns successfully" Jul 2 07:01:37.323593 kubelet[2313]: E0702 07:01:37.319005 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:38.320864 kubelet[2313]: E0702 07:01:38.320833 2313 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:38.363708 systemd[1]: cri-containerd-bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a.scope: Deactivated successfully. Jul 2 07:01:38.366000 audit: BPF prog-id=131 op=UNLOAD Jul 2 07:01:38.368847 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 07:01:38.368900 kernel: audit: type=1334 audit(1719903698.366:490): prog-id=131 op=UNLOAD Jul 2 07:01:38.371618 kubelet[2313]: I0702 07:01:38.371581 2313 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:01:38.390486 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a-rootfs.mount: Deactivated successfully. Jul 2 07:01:38.393472 kubelet[2313]: I0702 07:01:38.393367 2313 topology_manager.go:215] "Topology Admit Handler" podUID="0815c98c-0645-455a-b2ea-3705ee7d083c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-d6xh4" Jul 2 07:01:38.397476 kubelet[2313]: I0702 07:01:38.397382 2313 topology_manager.go:215] "Topology Admit Handler" podUID="ffd4bb71-e349-4c7c-bd03-9422990b17d3" podNamespace="kube-system" podName="coredns-7db6d8ff4d-kddkc" Jul 2 07:01:38.397476 kubelet[2313]: I0702 07:01:38.397475 2313 topology_manager.go:215] "Topology Admit Handler" podUID="f70926b2-a2c8-485f-8201-0e6ca8908647" podNamespace="calico-system" podName="calico-kube-controllers-546b9797b-qgj7r" Jul 2 07:01:38.399993 containerd[1289]: time="2024-07-02T07:01:38.399751490Z" level=info msg="shim disconnected" id=bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a namespace=k8s.io Jul 2 07:01:38.399993 containerd[1289]: time="2024-07-02T07:01:38.399871075Z" level=warning msg="cleaning up after shim disconnected" id=bcbf5a6163964c1e63ab25fbec6dc98304ffcc17605621b101e2d0541b17e87a namespace=k8s.io Jul 2 07:01:38.399993 containerd[1289]: time="2024-07-02T07:01:38.399881044Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:01:38.405010 systemd[1]: Created slice kubepods-besteffort-podf70926b2_a2c8_485f_8201_0e6ca8908647.slice - libcontainer container kubepods-besteffort-podf70926b2_a2c8_485f_8201_0e6ca8908647.slice. Jul 2 07:01:38.411293 systemd[1]: Created slice kubepods-burstable-pod0815c98c_0645_455a_b2ea_3705ee7d083c.slice - libcontainer container kubepods-burstable-pod0815c98c_0645_455a_b2ea_3705ee7d083c.slice. Jul 2 07:01:38.418791 systemd[1]: Created slice kubepods-burstable-podffd4bb71_e349_4c7c_bd03_9422990b17d3.slice - libcontainer container kubepods-burstable-podffd4bb71_e349_4c7c_bd03_9422990b17d3.slice. 
Jul 2 07:01:38.530298 kubelet[2313]: I0702 07:01:38.530245 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ffd4bb71-e349-4c7c-bd03-9422990b17d3-config-volume\") pod \"coredns-7db6d8ff4d-kddkc\" (UID: \"ffd4bb71-e349-4c7c-bd03-9422990b17d3\") " pod="kube-system/coredns-7db6d8ff4d-kddkc" Jul 2 07:01:38.530298 kubelet[2313]: I0702 07:01:38.530288 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-576cj\" (UniqueName: \"kubernetes.io/projected/ffd4bb71-e349-4c7c-bd03-9422990b17d3-kube-api-access-576cj\") pod \"coredns-7db6d8ff4d-kddkc\" (UID: \"ffd4bb71-e349-4c7c-bd03-9422990b17d3\") " pod="kube-system/coredns-7db6d8ff4d-kddkc" Jul 2 07:01:38.530538 kubelet[2313]: I0702 07:01:38.530321 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj6p5\" (UniqueName: \"kubernetes.io/projected/f70926b2-a2c8-485f-8201-0e6ca8908647-kube-api-access-zj6p5\") pod \"calico-kube-controllers-546b9797b-qgj7r\" (UID: \"f70926b2-a2c8-485f-8201-0e6ca8908647\") " pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" Jul 2 07:01:38.530538 kubelet[2313]: I0702 07:01:38.530362 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f70926b2-a2c8-485f-8201-0e6ca8908647-tigera-ca-bundle\") pod \"calico-kube-controllers-546b9797b-qgj7r\" (UID: \"f70926b2-a2c8-485f-8201-0e6ca8908647\") " pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" Jul 2 07:01:38.530538 kubelet[2313]: I0702 07:01:38.530382 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2rzg\" (UniqueName: \"kubernetes.io/projected/0815c98c-0645-455a-b2ea-3705ee7d083c-kube-api-access-b2rzg\") pod \"coredns-7db6d8ff4d-d6xh4\" (UID: \"0815c98c-0645-455a-b2ea-3705ee7d083c\") " pod="kube-system/coredns-7db6d8ff4d-d6xh4" Jul 2 07:01:38.530538 kubelet[2313]: I0702 07:01:38.530403 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0815c98c-0645-455a-b2ea-3705ee7d083c-config-volume\") pod \"coredns-7db6d8ff4d-d6xh4\" (UID: \"0815c98c-0645-455a-b2ea-3705ee7d083c\") " pod="kube-system/coredns-7db6d8ff4d-d6xh4" Jul 2 07:01:38.711490 containerd[1289]: time="2024-07-02T07:01:38.711382000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b9797b-qgj7r,Uid:f70926b2-a2c8-485f-8201-0e6ca8908647,Namespace:calico-system,Attempt:0,}" Jul 2 07:01:38.715706 kubelet[2313]: E0702 07:01:38.715671 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:38.716048 containerd[1289]: time="2024-07-02T07:01:38.716010019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d6xh4,Uid:0815c98c-0645-455a-b2ea-3705ee7d083c,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:38.722688 kubelet[2313]: E0702 07:01:38.722649 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:38.723026 containerd[1289]: time="2024-07-02T07:01:38.722983647Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kddkc,Uid:ffd4bb71-e349-4c7c-bd03-9422990b17d3,Namespace:kube-system,Attempt:0,}" Jul 2 07:01:38.833080 containerd[1289]: time="2024-07-02T07:01:38.832975385Z" level=error msg="Failed to destroy network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.833490 containerd[1289]: time="2024-07-02T07:01:38.833430430Z" level=error msg="encountered an error cleaning up failed sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.833553 containerd[1289]: time="2024-07-02T07:01:38.833511422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d6xh4,Uid:0815c98c-0645-455a-b2ea-3705ee7d083c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.833711 containerd[1289]: time="2024-07-02T07:01:38.833672044Z" level=error msg="Failed to destroy network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.833863 kubelet[2313]: E0702 07:01:38.833807 2313 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.833934 kubelet[2313]: E0702 07:01:38.833888 2313 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d6xh4" Jul 2 07:01:38.833934 kubelet[2313]: E0702 07:01:38.833919 2313 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-d6xh4" Jul 2 07:01:38.834017 kubelet[2313]: E0702 07:01:38.833974 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-d6xh4_kube-system(0815c98c-0645-455a-b2ea-3705ee7d083c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-d6xh4_kube-system(0815c98c-0645-455a-b2ea-3705ee7d083c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d6xh4" podUID="0815c98c-0645-455a-b2ea-3705ee7d083c" Jul 2 07:01:38.835323 containerd[1289]: time="2024-07-02T07:01:38.835274315Z" level=error msg="encountered an error cleaning up failed sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.835477 containerd[1289]: time="2024-07-02T07:01:38.835335791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b9797b-qgj7r,Uid:f70926b2-a2c8-485f-8201-0e6ca8908647,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.835590 kubelet[2313]: E0702 07:01:38.835457 2313 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.835590 kubelet[2313]: E0702 07:01:38.835521 2313 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" Jul 2 07:01:38.835590 kubelet[2313]: E0702 07:01:38.835542 2313 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" Jul 2 07:01:38.835811 kubelet[2313]: E0702 07:01:38.835586 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-546b9797b-qgj7r_calico-system(f70926b2-a2c8-485f-8201-0e6ca8908647)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-546b9797b-qgj7r_calico-system(f70926b2-a2c8-485f-8201-0e6ca8908647)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" podUID="f70926b2-a2c8-485f-8201-0e6ca8908647" Jul 2 07:01:38.836432 containerd[1289]: time="2024-07-02T07:01:38.836382288Z" level=error msg="Failed to destroy network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.836730 containerd[1289]: time="2024-07-02T07:01:38.836685979Z" level=error msg="encountered an error cleaning up failed sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.836786 containerd[1289]: time="2024-07-02T07:01:38.836736534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kddkc,Uid:ffd4bb71-e349-4c7c-bd03-9422990b17d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.836925 kubelet[2313]: E0702 07:01:38.836885 2313 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.836982 kubelet[2313]: E0702 07:01:38.836928 2313 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kddkc" Jul 2 07:01:38.836982 kubelet[2313]: E0702 07:01:38.836949 2313 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-kddkc" Jul 2 07:01:38.837055 kubelet[2313]: E0702 07:01:38.836986 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-kddkc_kube-system(ffd4bb71-e349-4c7c-bd03-9422990b17d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-kddkc_kube-system(ffd4bb71-e349-4c7c-bd03-9422990b17d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kddkc" podUID="ffd4bb71-e349-4c7c-bd03-9422990b17d3" Jul 2 07:01:38.870017 systemd[1]: Created slice kubepods-besteffort-podbb25da8c_03d5_4d1f_8f90_51fb2f280ed3.slice - libcontainer container kubepods-besteffort-podbb25da8c_03d5_4d1f_8f90_51fb2f280ed3.slice. Jul 2 07:01:38.872985 containerd[1289]: time="2024-07-02T07:01:38.872941026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnptn,Uid:bb25da8c-03d5-4d1f-8f90-51fb2f280ed3,Namespace:calico-system,Attempt:0,}" Jul 2 07:01:38.937324 containerd[1289]: time="2024-07-02T07:01:38.937221882Z" level=error msg="Failed to destroy network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.937666 containerd[1289]: time="2024-07-02T07:01:38.937625321Z" level=error msg="encountered an error cleaning up failed sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.937717 containerd[1289]: time="2024-07-02T07:01:38.937692226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnptn,Uid:bb25da8c-03d5-4d1f-8f90-51fb2f280ed3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.938004 kubelet[2313]: E0702 07:01:38.937958 2313 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:38.938085 kubelet[2313]: E0702 07:01:38.938027 2313 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:38.938085 kubelet[2313]: E0702 07:01:38.938050 2313 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnptn" Jul 2 07:01:38.938168 kubelet[2313]: E0702 07:01:38.938094 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rnptn_calico-system(bb25da8c-03d5-4d1f-8f90-51fb2f280ed3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rnptn_calico-system(bb25da8c-03d5-4d1f-8f90-51fb2f280ed3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:39.324257 kubelet[2313]: I0702 07:01:39.324223 2313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:39.324953 containerd[1289]: time="2024-07-02T07:01:39.324913564Z" level=info msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" Jul 2 07:01:39.325223 containerd[1289]: time="2024-07-02T07:01:39.325202567Z" level=info msg="Ensure that sandbox c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793 in task-service has been cleanup successfully" Jul 2 07:01:39.325788 kubelet[2313]: I0702 07:01:39.325765 2313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:39.326286 containerd[1289]: time="2024-07-02T07:01:39.326239778Z" level=info msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" Jul 2 07:01:39.326604 containerd[1289]: time="2024-07-02T07:01:39.326532347Z" level=info msg="Ensure that sandbox 53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551 in task-service has been cleanup successfully" Jul 2 07:01:39.327713 kubelet[2313]: I0702 07:01:39.327685 2313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:39.328044 containerd[1289]: time="2024-07-02T07:01:39.328017358Z" level=info msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" Jul 2 07:01:39.328363 containerd[1289]: time="2024-07-02T07:01:39.328336798Z" level=info msg="Ensure that sandbox 694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813 in task-service has been cleanup successfully" Jul 2 07:01:39.329374 kubelet[2313]: E0702 07:01:39.329347 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:39.330310 containerd[1289]: time="2024-07-02T07:01:39.330260664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 07:01:39.331292 kubelet[2313]: I0702 07:01:39.331248 2313 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:39.331965 containerd[1289]: time="2024-07-02T07:01:39.331931995Z" level=info 
msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" Jul 2 07:01:39.332255 containerd[1289]: time="2024-07-02T07:01:39.332215087Z" level=info msg="Ensure that sandbox 08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67 in task-service has been cleanup successfully" Jul 2 07:01:39.358772 containerd[1289]: time="2024-07-02T07:01:39.358715061Z" level=error msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" failed" error="failed to destroy network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:39.359198 kubelet[2313]: E0702 07:01:39.359153 2313 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:39.359270 kubelet[2313]: E0702 07:01:39.359226 2313 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551"} Jul 2 07:01:39.359316 kubelet[2313]: E0702 07:01:39.359296 2313 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f70926b2-a2c8-485f-8201-0e6ca8908647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:01:39.359399 kubelet[2313]: E0702 07:01:39.359335 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f70926b2-a2c8-485f-8201-0e6ca8908647\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" podUID="f70926b2-a2c8-485f-8201-0e6ca8908647" Jul 2 07:01:39.362250 containerd[1289]: time="2024-07-02T07:01:39.362198748Z" level=error msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" failed" error="failed to destroy network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:39.362431 kubelet[2313]: E0702 07:01:39.362401 2313 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:39.362482 kubelet[2313]: E0702 07:01:39.362448 2313 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793"} Jul 2 07:01:39.362508 kubelet[2313]: E0702 07:01:39.362483 2313 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ffd4bb71-e349-4c7c-bd03-9422990b17d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:01:39.362561 kubelet[2313]: E0702 07:01:39.362513 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ffd4bb71-e349-4c7c-bd03-9422990b17d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-kddkc" podUID="ffd4bb71-e349-4c7c-bd03-9422990b17d3" Jul 2 07:01:39.368218 containerd[1289]: time="2024-07-02T07:01:39.368141537Z" level=error msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" failed" error="failed to destroy network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:39.368411 kubelet[2313]: E0702 07:01:39.368365 2313 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:39.368464 kubelet[2313]: E0702 07:01:39.368424 2313 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813"} Jul 2 07:01:39.368464 kubelet[2313]: E0702 07:01:39.368454 2313 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:01:39.368570 kubelet[2313]: E0702 
07:01:39.368474 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnptn" podUID="bb25da8c-03d5-4d1f-8f90-51fb2f280ed3" Jul 2 07:01:39.380158 containerd[1289]: time="2024-07-02T07:01:39.380084442Z" level=error msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" failed" error="failed to destroy network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 07:01:39.380381 kubelet[2313]: E0702 07:01:39.380338 2313 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:39.380440 kubelet[2313]: E0702 07:01:39.380389 2313 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67"} Jul 2 07:01:39.380440 kubelet[2313]: E0702 07:01:39.380424 2313 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0815c98c-0645-455a-b2ea-3705ee7d083c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 07:01:39.380538 kubelet[2313]: E0702 07:01:39.380446 2313 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0815c98c-0645-455a-b2ea-3705ee7d083c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-d6xh4" podUID="0815c98c-0645-455a-b2ea-3705ee7d083c" Jul 2 07:01:39.391003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551-shm.mount: Deactivated successfully. Jul 2 07:01:39.391103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67-shm.mount: Deactivated successfully. 
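Every sandbox failure logged above, for coredns-7db6d8ff4d-d6xh4, coredns-7db6d8ff4d-kddkc, calico-kube-controllers-546b9797b-qgj7r and csi-node-driver-rnptn alike, reduces to the same root cause stated in the error text: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file that the calico/node container writes once it is running with /var/lib/calico/ mounted from the host. The sketch below is a minimal illustration of that check, assuming only what the error message states; it is not Calico's actual source, and readNodename is a hypothetical helper name.

    // nodename_check.go: a minimal sketch (not Calico's actual source) of the
    // check implied by the errors above. The CNI plugin looks for the node name
    // in /var/lib/calico/nodename, a file the calico/node container writes once
    // it is running and has /var/lib/calico/ mounted from the host.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename" // path taken from the log messages

    // readNodename is a hypothetical helper mirroring the failing stat() above.
    func readNodename() (string, error) {
        if _, err := os.Stat(nodenameFile); os.IsNotExist(err) {
            // This is the condition behind every "failed (add)" and
            // "failed (delete)" record in this section of the log.
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

Consistent with this reading, the errors keep recurring only until calico-node is pulled and started later in the log (around 07:01:45), after which the retried sandbox operations can be expected to go through.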
Jul 2 07:01:39.533810 systemd[1]: Started sshd@7-10.0.0.127:22-10.0.0.1:55158.service - OpenSSH per-connection server daemon (10.0.0.1:55158). Jul 2 07:01:39.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.127:22-10.0.0.1:55158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:39.538161 kernel: audit: type=1130 audit(1719903699.532:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.127:22-10.0.0.1:55158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:39.564000 audit[3326]: USER_ACCT pid=3326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.566313 sshd[3326]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:01:39.567477 sshd[3326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:01:39.565000 audit[3326]: CRED_ACQ pid=3326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.571850 systemd-logind[1274]: New session 8 of user core. Jul 2 07:01:39.572672 kernel: audit: type=1101 audit(1719903699.564:492): pid=3326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.572713 kernel: audit: type=1103 audit(1719903699.565:493): pid=3326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.572732 kernel: audit: type=1006 audit(1719903699.565:494): pid=3326 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 2 07:01:39.565000 audit[3326]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd010db8a0 a2=3 a3=7f0c5dad6480 items=0 ppid=1 pid=3326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:39.578046 kernel: audit: type=1300 audit(1719903699.565:494): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd010db8a0 a2=3 a3=7f0c5dad6480 items=0 ppid=1 pid=3326 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:39.565000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:39.579397 kernel: audit: type=1327 audit(1719903699.565:494): proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:39.581421 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 07:01:39.585000 audit[3326]: USER_START pid=3326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.586000 audit[3328]: CRED_ACQ pid=3328 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.592431 kernel: audit: type=1105 audit(1719903699.585:495): pid=3326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.592487 kernel: audit: type=1103 audit(1719903699.586:496): pid=3328 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.707289 sshd[3326]: pam_unix(sshd:session): session closed for user core Jul 2 07:01:39.707000 audit[3326]: USER_END pid=3326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.709750 systemd[1]: sshd@7-10.0.0.127:22-10.0.0.1:55158.service: Deactivated successfully. Jul 2 07:01:39.710669 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:01:39.711286 systemd-logind[1274]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:01:39.712034 systemd-logind[1274]: Removed session 8. Jul 2 07:01:39.707000 audit[3326]: CRED_DISP pid=3326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:39.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.127:22-10.0.0.1:55158 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:39.713204 kernel: audit: type=1106 audit(1719903699.707:497): pid=3326 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:43.934927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3897765788.mount: Deactivated successfully. Jul 2 07:01:44.719106 systemd[1]: Started sshd@8-10.0.0.127:22-10.0.0.1:38724.service - OpenSSH per-connection server daemon (10.0.0.1:38724). Jul 2 07:01:44.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.127:22-10.0.0.1:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:01:44.723443 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 2 07:01:44.723568 kernel: audit: type=1130 audit(1719903704.718:500): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.127:22-10.0.0.1:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:44.750000 audit[3346]: USER_ACCT pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.752098 sshd[3346]: Accepted publickey for core from 10.0.0.1 port 38724 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:01:44.753181 sshd[3346]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:01:44.751000 audit[3346]: CRED_ACQ pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.756945 systemd-logind[1274]: New session 9 of user core. Jul 2 07:01:44.758666 kernel: audit: type=1101 audit(1719903704.750:501): pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.758723 kernel: audit: type=1103 audit(1719903704.751:502): pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.758742 kernel: audit: type=1006 audit(1719903704.751:503): pid=3346 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 07:01:44.751000 audit[3346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff65165170 a2=3 a3=7f605119e480 items=0 ppid=1 pid=3346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:44.763713 kernel: audit: type=1300 audit(1719903704.751:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff65165170 a2=3 a3=7f605119e480 items=0 ppid=1 pid=3346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:44.763826 kernel: audit: type=1327 audit(1719903704.751:503): proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:44.751000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:44.770433 systemd[1]: Started session-9.scope - Session 9 of User core. 
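The SSH activity in this stretch of the log (sessions 8 and 9 above, session 10 further down) produces the same audit trail around each sshd/PAM record pair: USER_ACCT and CRED_ACQ when the publickey is accepted, USER_START when the session scope opens, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection unit when it closes, with the kernel echoing each record as an audit: type=... line. The snippet below is a small self-contained sketch for pulling the record type and key=value fields out of such audit[pid]: lines; the parsing rules are assumptions based only on the format visible in this log.

    // auditfields.go: a self-contained sketch for extracting the record type and
    // key=value fields from journal lines like the audit entries above, e.g.
    // "audit[3326]: USER_START pid=3326 uid=0 auid=500 ses=8 msg='...'".
    // The layout it assumes is taken only from the lines in this log.
    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    var auditRe = regexp.MustCompile(`audit\[\d+\]: ([A-Z_]+) (.*)$`)

    // parseAudit returns the record type (USER_START, CRED_DISP, ...) and its
    // space-separated key=value fields; the quoted msg='...' payload is kept
    // as a single field so its internal spaces survive.
    func parseAudit(line string) (string, map[string]string, bool) {
        m := auditRe.FindStringSubmatch(line)
        if m == nil {
            return "", nil, false
        }
        fields := map[string]string{}
        rest := m[2]
        if i := strings.Index(rest, "msg='"); i >= 0 {
            fields["msg"] = strings.TrimSuffix(rest[i+len("msg='"):], "'")
            rest = rest[:i]
        }
        for _, kv := range strings.Fields(rest) {
            if k, v, ok := strings.Cut(kv, "="); ok {
                fields[k] = v
            }
        }
        return m[1], fields, true
    }

    func main() {
        line := `Jul 2 07:01:39.585000 audit[3326]: USER_START pid=3326 uid=0 auid=500 ses=8 msg='op=PAM:session_open acct="core" res=success'`
        typ, f, ok := parseAudit(line)
        fmt.Println(ok, typ, f["ses"], f["auid"], f["msg"])
    }

Grouping the extracted records by their ses= field reconstructs one login per SSH session scope (ses=8, 9 and 10 in this section).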
Jul 2 07:01:44.774000 audit[3346]: USER_START pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.775000 audit[3348]: CRED_ACQ pid=3348 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.781316 kernel: audit: type=1105 audit(1719903704.774:504): pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.781438 kernel: audit: type=1103 audit(1719903704.775:505): pid=3348 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.939369 sshd[3346]: pam_unix(sshd:session): session closed for user core Jul 2 07:01:44.939000 audit[3346]: USER_END pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.941789 systemd[1]: sshd@8-10.0.0.127:22-10.0.0.1:38724.service: Deactivated successfully. Jul 2 07:01:44.942659 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:01:44.943204 systemd-logind[1274]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:01:44.943860 systemd-logind[1274]: Removed session 9. Jul 2 07:01:44.939000 audit[3346]: CRED_DISP pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.947846 kernel: audit: type=1106 audit(1719903704.939:506): pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.947908 kernel: audit: type=1104 audit(1719903704.939:507): pid=3346 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:44.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.127:22-10.0.0.1:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:01:45.011597 kubelet[2313]: I0702 07:01:45.011568 2313 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:01:45.012152 kubelet[2313]: E0702 07:01:45.012122 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:45.159464 containerd[1289]: time="2024-07-02T07:01:45.159368656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:45.193026 containerd[1289]: time="2024-07-02T07:01:45.192951869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 07:01:45.192000 audit[3361]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:45.192000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd02a9bcd0 a2=0 a3=7ffd02a9bcbc items=0 ppid=2475 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:45.192000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:45.194501 containerd[1289]: time="2024-07-02T07:01:45.194442108Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:45.196201 containerd[1289]: time="2024-07-02T07:01:45.196178328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:45.197637 containerd[1289]: time="2024-07-02T07:01:45.197614996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:45.198301 containerd[1289]: time="2024-07-02T07:01:45.198250320Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.867384639s" Jul 2 07:01:45.198340 containerd[1289]: time="2024-07-02T07:01:45.198308038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 07:01:45.193000 audit[3361]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:45.193000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd02a9bcd0 a2=0 a3=7ffd02a9bcbc items=0 ppid=2475 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:45.193000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:45.211623 containerd[1289]: time="2024-07-02T07:01:45.211573643Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 07:01:45.228402 containerd[1289]: time="2024-07-02T07:01:45.228345602Z" level=info msg="CreateContainer within sandbox \"4a2051e63e04312534364ef30c15109ee32e5fff48a63ed9263a54abc6639ed5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c\"" Jul 2 07:01:45.229104 containerd[1289]: time="2024-07-02T07:01:45.228893481Z" level=info msg="StartContainer for \"4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c\"" Jul 2 07:01:45.314304 systemd[1]: Started cri-containerd-4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c.scope - libcontainer container 4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c. Jul 2 07:01:45.326000 audit: BPF prog-id=132 op=LOAD Jul 2 07:01:45.326000 audit[3372]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2818 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:45.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373134306432613765616266333262653864343233336563663737 Jul 2 07:01:45.326000 audit: BPF prog-id=133 op=LOAD Jul 2 07:01:45.326000 audit[3372]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2818 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:45.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373134306432613765616266333262653864343233336563663737 Jul 2 07:01:45.326000 audit: BPF prog-id=133 op=UNLOAD Jul 2 07:01:45.326000 audit: BPF prog-id=132 op=UNLOAD Jul 2 07:01:45.326000 audit: BPF prog-id=134 op=LOAD Jul 2 07:01:45.326000 audit[3372]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2818 pid=3372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:45.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464373134306432613765616266333262653864343233336563663737 Jul 2 07:01:45.950248 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 07:01:45.950401 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Jul 2 07:01:45.989955 containerd[1289]: time="2024-07-02T07:01:45.989908460Z" level=info msg="StartContainer for \"4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c\" returns successfully" Jul 2 07:01:45.992949 kubelet[2313]: E0702 07:01:45.992903 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:45.993280 kubelet[2313]: E0702 07:01:45.992991 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:48.046000 audit[3474]: AVC avc: denied { write } for pid=3474 comm="tee" name="fd" dev="proc" ino=24815 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.048000 audit[3480]: AVC avc: denied { write } for pid=3480 comm="tee" name="fd" dev="proc" ino=23965 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.048000 audit[3480]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffffee5ea26 a2=241 a3=1b6 items=1 ppid=3440 pid=3480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.048000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 07:01:48.048000 audit: PATH item=0 name="/dev/fd/63" inode=24806 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.049000 audit[3478]: AVC avc: denied { write } for pid=3478 comm="tee" name="fd" dev="proc" ino=26975 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.049000 audit[3478]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffdf40fa26 a2=241 a3=1b6 items=1 ppid=3449 pid=3478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.049000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 07:01:48.049000 audit: PATH item=0 name="/dev/fd/63" inode=25924 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.050000 audit[3476]: AVC avc: denied { write } for pid=3476 comm="tee" name="fd" dev="proc" ino=24820 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.051000 audit[3505]: AVC avc: denied { write } for pid=3505 comm="tee" name="fd" dev="proc" ino=23972 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.051000 audit[3505]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc55beda17 a2=241 a3=1b6 items=1 ppid=3451 
pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.051000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 07:01:48.051000 audit: PATH item=0 name="/dev/fd/63" inode=23969 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.046000 audit[3474]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd20319a26 a2=241 a3=1b6 items=1 ppid=3439 pid=3474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.046000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 07:01:48.046000 audit: PATH item=0 name="/dev/fd/63" inode=24805 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.046000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.050000 audit[3476]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe1fae9a16 a2=241 a3=1b6 items=1 ppid=3447 pid=3476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.050000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 07:01:48.050000 audit: PATH item=0 name="/dev/fd/63" inode=25923 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.060000 audit[3502]: AVC avc: denied { write } for pid=3502 comm="tee" name="fd" dev="proc" ino=25931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.060000 audit[3502]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdfb079a27 a2=241 a3=1b6 items=1 ppid=3441 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.060000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 07:01:48.060000 audit: PATH item=0 name="/dev/fd/63" inode=23962 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.060000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.084000 audit[3519]: AVC avc: denied { write } for pid=3519 comm="tee" name="fd" dev="proc" ino=25935 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:01:48.084000 audit[3519]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5a6d7a28 a2=241 a3=1b6 items=1 ppid=3445 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.084000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 07:01:48.084000 audit: PATH item=0 name="/dev/fd/63" inode=26980 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:01:48.084000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:01:48.238785 systemd-networkd[1112]: vxlan.calico: Link UP Jul 2 07:01:48.238794 systemd-networkd[1112]: vxlan.calico: Gained carrier Jul 2 07:01:48.253000 audit: BPF prog-id=135 op=LOAD Jul 2 07:01:48.253000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6bfbd590 a2=70 a3=7f5830017000 items=0 ppid=3442 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.253000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:01:48.254000 audit: BPF prog-id=135 op=UNLOAD Jul 2 07:01:48.254000 audit: BPF prog-id=136 op=LOAD Jul 2 07:01:48.254000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc6bfbd590 a2=70 a3=6f items=0 ppid=3442 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.254000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:01:48.254000 audit: BPF prog-id=136 op=UNLOAD Jul 2 07:01:48.254000 audit: BPF prog-id=137 op=LOAD Jul 2 07:01:48.254000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc6bfbd520 a2=70 a3=7ffc6bfbd590 items=0 ppid=3442 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.254000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:01:48.254000 audit: BPF prog-id=137 op=UNLOAD Jul 2 07:01:48.254000 audit: BPF prog-id=138 op=LOAD Jul 2 07:01:48.254000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6bfbd550 a2=70 a3=0 items=0 ppid=3442 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.254000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:01:48.265000 audit: BPF prog-id=138 op=UNLOAD Jul 2 07:01:48.312000 audit[3618]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3618 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:48.312000 audit[3618]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe1e7abc50 a2=0 a3=7ffe1e7abc3c items=0 ppid=3442 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.312000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:48.315000 audit[3617]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:48.315000 audit[3617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff1ccb07f0 a2=0 a3=7fff1ccb07dc items=0 ppid=3442 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.315000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:48.318000 audit[3616]: NETFILTER_CFG table=raw:99 family=2 entries=19 op=nft_register_chain pid=3616 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:48.318000 audit[3616]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffd9f175b20 a2=0 a3=7ffd9f175b0c items=0 ppid=3442 pid=3616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.318000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:48.320000 audit[3620]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:48.320000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffcd7c49aa0 a2=0 a3=7ffcd7c49a8c items=0 ppid=3442 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:48.320000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:49.524328 systemd-networkd[1112]: vxlan.calico: Gained IPv6LL Jul 2 07:01:49.954119 systemd[1]: Started sshd@9-10.0.0.127:22-10.0.0.1:38740.service - OpenSSH per-connection server daemon (10.0.0.1:38740). 
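The vxlan.calico link that systemd-networkd reports coming up in the records above (and gaining IPv6LL shortly after) is the VXLAN overlay device that calico-node creates for pod traffic, alongside the nftables chains registered by the iptables-nft-restore records. The sketch below shows how such a device is typically created from Go with the vishvananda/netlink package; it is illustrative rather than Calico's own code, and the VNI, UDP port and underlay interface are assumptions.

    // vxlan_sketch.go: an illustrative sketch (not Calico's own code) of creating
    // a VXLAN device like the "vxlan.calico" link reported above, using the
    // vishvananda/netlink package. VNI, UDP port and parent interface are
    // assumptions made for the example.
    package main

    import (
        "log"

        "github.com/vishvananda/netlink"
    )

    func main() {
        parent, err := netlink.LinkByName("eth0") // assumed underlay interface
        if err != nil {
            log.Fatal(err)
        }

        vxlan := &netlink.Vxlan{
            LinkAttrs:    netlink.LinkAttrs{Name: "vxlan.calico"},
            VxlanId:      4096,                 // assumed VNI
            Port:         4789,                 // assumed VXLAN UDP port
            VtepDevIndex: parent.Attrs().Index, // send/receive over the underlay
        }

        // Create the device, then bring it up ("Link UP" / "Gained carrier" above).
        if err := netlink.LinkAdd(vxlan); err != nil {
            log.Fatal(err)
        }
        if err := netlink.LinkSetUp(vxlan); err != nil {
            log.Fatal(err)
        }
        log.Println("created and enabled", vxlan.Attrs().Name)
    }

Bringing the device up is only part of the job: per-peer VTEP state (FDB and neighbor entries) still has to be programmed before overlay traffic flows, and that part is omitted here.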
Jul 2 07:01:49.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.127:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:49.955663 kernel: kauditd_printk_skb: 81 callbacks suppressed Jul 2 07:01:49.955771 kernel: audit: type=1130 audit(1719903709.953:535): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.127:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:49.984000 audit[3627]: USER_ACCT pid=3627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:49.985925 sshd[3627]: Accepted publickey for core from 10.0.0.1 port 38740 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:01:49.987226 sshd[3627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:01:49.985000 audit[3627]: CRED_ACQ pid=3627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:49.992422 systemd-logind[1274]: New session 10 of user core. Jul 2 07:01:49.993472 kernel: audit: type=1101 audit(1719903709.984:536): pid=3627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:49.993521 kernel: audit: type=1103 audit(1719903709.985:537): pid=3627 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:49.993539 kernel: audit: type=1006 audit(1719903709.985:538): pid=3627 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 07:01:49.985000 audit[3627]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd5ab6f80 a2=3 a3=7f45f77a3480 items=0 ppid=1 pid=3627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:49.999244 kernel: audit: type=1300 audit(1719903709.985:538): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd5ab6f80 a2=3 a3=7f45f77a3480 items=0 ppid=1 pid=3627 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:49.999348 kernel: audit: type=1327 audit(1719903709.985:538): proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:49.985000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:50.006345 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 07:01:50.009000 audit[3627]: USER_START pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.009000 audit[3629]: CRED_ACQ pid=3629 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.018632 kernel: audit: type=1105 audit(1719903710.009:539): pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.018741 kernel: audit: type=1103 audit(1719903710.009:540): pid=3629 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.132542 sshd[3627]: pam_unix(sshd:session): session closed for user core Jul 2 07:01:50.132000 audit[3627]: USER_END pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.135699 systemd[1]: sshd@9-10.0.0.127:22-10.0.0.1:38740.service: Deactivated successfully. Jul 2 07:01:50.136522 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:01:50.137106 systemd-logind[1274]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:01:50.132000 audit[3627]: CRED_DISP pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.137930 systemd-logind[1274]: Removed session 10. Jul 2 07:01:50.139864 kernel: audit: type=1106 audit(1719903710.132:541): pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.139932 kernel: audit: type=1104 audit(1719903710.132:542): pid=3627 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:50.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.127:22-10.0.0.1:38740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:01:50.864119 containerd[1289]: time="2024-07-02T07:01:50.864061990Z" level=info msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" Jul 2 07:01:50.867600 containerd[1289]: time="2024-07-02T07:01:50.867551091Z" level=info msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" Jul 2 07:01:51.056926 kubelet[2313]: I0702 07:01:51.056534 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rnxng" podStartSLOduration=6.70958065 podStartE2EDuration="25.056513s" podCreationTimestamp="2024-07-02 07:01:26 +0000 UTC" firstStartedPulling="2024-07-02 07:01:26.85210102 +0000 UTC m=+20.067088958" lastFinishedPulling="2024-07-02 07:01:45.19903337 +0000 UTC m=+38.414021308" observedRunningTime="2024-07-02 07:01:46.381295478 +0000 UTC m=+39.596283416" watchObservedRunningTime="2024-07-02 07:01:51.056513 +0000 UTC m=+44.271500938" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] k8s.go 608: Cleaning up netns ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" iface="eth0" netns="/var/run/netns/cni-30805611-292a-210c-0b4f-da2bf2180d35" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" iface="eth0" netns="/var/run/netns/cni-30805611-292a-210c-0b4f-da2bf2180d35" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" iface="eth0" netns="/var/run/netns/cni-30805611-292a-210c-0b4f-da2bf2180d35" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] k8s.go 615: Releasing IP address(es) ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.057 [INFO][3677] utils.go 188: Calico CNI releasing IP address ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.105 [INFO][3693] ipam_plugin.go 411: Releasing address using handleID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.105 [INFO][3693] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.106 [INFO][3693] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.113 [WARNING][3693] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.113 [INFO][3693] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.114 [INFO][3693] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:01:51.117278 containerd[1289]: 2024-07-02 07:01:51.115 [INFO][3677] k8s.go 621: Teardown processing complete. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:01:51.119480 containerd[1289]: time="2024-07-02T07:01:51.119432092Z" level=info msg="TearDown network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" successfully" Jul 2 07:01:51.119578 containerd[1289]: time="2024-07-02T07:01:51.119558409Z" level=info msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" returns successfully" Jul 2 07:01:51.119732 systemd[1]: run-netns-cni\x2d30805611\x2d292a\x2d210c\x2d0b4f\x2dda2bf2180d35.mount: Deactivated successfully. Jul 2 07:01:51.120549 kubelet[2313]: E0702 07:01:51.120524 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:51.121079 containerd[1289]: time="2024-07-02T07:01:51.121054177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d6xh4,Uid:0815c98c-0645-455a-b2ea-3705ee7d083c,Namespace:kube-system,Attempt:1,}" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.066 [INFO][3678] k8s.go 608: Cleaning up netns ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.067 [INFO][3678] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" iface="eth0" netns="/var/run/netns/cni-a2cab395-26a0-af84-0c53-234765039f8d" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.067 [INFO][3678] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" iface="eth0" netns="/var/run/netns/cni-a2cab395-26a0-af84-0c53-234765039f8d" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.067 [INFO][3678] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" iface="eth0" netns="/var/run/netns/cni-a2cab395-26a0-af84-0c53-234765039f8d" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.067 [INFO][3678] k8s.go 615: Releasing IP address(es) ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.067 [INFO][3678] utils.go 188: Calico CNI releasing IP address ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.105 [INFO][3698] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.105 [INFO][3698] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.114 [INFO][3698] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.120 [WARNING][3698] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.120 [INFO][3698] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.123 [INFO][3698] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:01:51.126547 containerd[1289]: 2024-07-02 07:01:51.124 [INFO][3678] k8s.go 621: Teardown processing complete. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:01:51.127247 containerd[1289]: time="2024-07-02T07:01:51.126692653Z" level=info msg="TearDown network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" successfully" Jul 2 07:01:51.127247 containerd[1289]: time="2024-07-02T07:01:51.126720765Z" level=info msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" returns successfully" Jul 2 07:01:51.127442 kubelet[2313]: E0702 07:01:51.127396 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:51.127892 containerd[1289]: time="2024-07-02T07:01:51.127811823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kddkc,Uid:ffd4bb71-e349-4c7c-bd03-9422990b17d3,Namespace:kube-system,Attempt:1,}" Jul 2 07:01:51.128621 systemd[1]: run-netns-cni\x2da2cab395\x2d26a0\x2daf84\x2d0c53\x2d234765039f8d.mount: Deactivated successfully. 
Jul 2 07:01:51.377107 systemd-networkd[1112]: calib1b775950e0: Link UP Jul 2 07:01:51.379496 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:01:51.379554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib1b775950e0: link becomes ready Jul 2 07:01:51.379647 systemd-networkd[1112]: calib1b775950e0: Gained carrier Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.181 [INFO][3710] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0 coredns-7db6d8ff4d- kube-system 0815c98c-0645-455a-b2ea-3705ee7d083c 782 0 2024-07-02 07:01:21 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-d6xh4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1b775950e0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.182 [INFO][3710] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.209 [INFO][3738] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" HandleID="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.222 [INFO][3738] ipam_plugin.go 264: Auto assigning IP ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" HandleID="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddeb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-d6xh4", "timestamp":"2024-07-02 07:01:51.20947378 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.222 [INFO][3738] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.222 [INFO][3738] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.222 [INFO][3738] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.224 [INFO][3738] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.229 [INFO][3738] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.233 [INFO][3738] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.237 [INFO][3738] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.240 [INFO][3738] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.240 [INFO][3738] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.242 [INFO][3738] ipam.go 1685: Creating new handle: k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.246 [INFO][3738] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.372 [INFO][3738] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.372 [INFO][3738] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" host="localhost" Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.372 [INFO][3738] ipam_plugin.go 373: Released host-wide IPAM lock. 
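Note: the IPAM records above confirm affinity for block 192.168.88.128/26 and claim 192.168.88.129 from it. A /26 leaves six host bits, so each of these Calico blocks spans 64 addresses. A small netip check of that claim (illustration only, not Calico's IPAM code):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the IPAM records above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	ip := netip.MustParseAddr("192.168.88.129")

	// 32 - 26 = 6 host bits -> 64 addresses per block.
	size := 1 << (32 - block.Bits())

	fmt.Printf("block %s holds %d addresses; contains %s: %v\n",
		block, size, ip, block.Contains(ip))
	// block 192.168.88.128/26 holds 64 addresses; contains 192.168.88.129: true
}
```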
Jul 2 07:01:51.390946 containerd[1289]: 2024-07-02 07:01:51.373 [INFO][3738] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" HandleID="k8s-pod-network.7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.374 [INFO][3710] k8s.go 386: Populated endpoint ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0815c98c-0645-455a-b2ea-3705ee7d083c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-d6xh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b775950e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.375 [INFO][3710] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.375 [INFO][3710] dataplane_linux.go 68: Setting the host side veth name to calib1b775950e0 ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.379 [INFO][3710] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.380 [INFO][3710] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0815c98c-0645-455a-b2ea-3705ee7d083c", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee", Pod:"coredns-7db6d8ff4d-d6xh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b775950e0", MAC:"a6:8c:1d:31:a9:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:51.391647 containerd[1289]: 2024-07-02 07:01:51.387 [INFO][3710] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee" Namespace="kube-system" Pod="coredns-7db6d8ff4d-d6xh4" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:01:51.402000 audit[3770]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3770 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:51.402000 audit[3770]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7fff0ab90f10 a2=0 a3=7fff0ab90efc items=0 ppid=3442 pid=3770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:51.402000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:51.461287 containerd[1289]: time="2024-07-02T07:01:51.461200195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:51.461487 containerd[1289]: time="2024-07-02T07:01:51.461309270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:51.461487 containerd[1289]: time="2024-07-02T07:01:51.461343644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:51.461487 containerd[1289]: time="2024-07-02T07:01:51.461367970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:51.478361 systemd[1]: Started cri-containerd-7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee.scope - libcontainer container 7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee. Jul 2 07:01:51.485000 audit: BPF prog-id=139 op=LOAD Jul 2 07:01:51.486000 audit: BPF prog-id=140 op=LOAD Jul 2 07:01:51.486000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3779 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:51.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373164383636663866336563653735643433383463336339343238 Jul 2 07:01:51.486000 audit: BPF prog-id=141 op=LOAD Jul 2 07:01:51.486000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3779 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:51.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373164383636663866336563653735643433383463336339343238 Jul 2 07:01:51.486000 audit: BPF prog-id=141 op=UNLOAD Jul 2 07:01:51.486000 audit: BPF prog-id=140 op=UNLOAD Jul 2 07:01:51.486000 audit: BPF prog-id=142 op=LOAD Jul 2 07:01:51.486000 audit[3788]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3779 pid=3788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:51.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734373164383636663866336563653735643433383463336339343238 Jul 2 07:01:51.488045 systemd-resolved[1227]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:01:51.511107 containerd[1289]: time="2024-07-02T07:01:51.511057308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-d6xh4,Uid:0815c98c-0645-455a-b2ea-3705ee7d083c,Namespace:kube-system,Attempt:1,} returns sandbox id \"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee\"" Jul 2 07:01:51.511932 kubelet[2313]: E0702 07:01:51.511916 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:51.514189 containerd[1289]: time="2024-07-02T07:01:51.514158680Z" level=info msg="CreateContainer within sandbox \"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:01:51.658780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali285cc9ffff5: link becomes ready Jul 2 07:01:51.657456 systemd-networkd[1112]: cali285cc9ffff5: Link UP Jul 2 07:01:51.658360 systemd-networkd[1112]: cali285cc9ffff5: Gained carrier Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.201 [INFO][3725] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0 coredns-7db6d8ff4d- kube-system ffd4bb71-e349-4c7c-bd03-9422990b17d3 783 0 2024-07-02 07:01:21 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-kddkc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali285cc9ffff5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.201 [INFO][3725] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.233 [INFO][3745] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" HandleID="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.241 [INFO][3745] ipam_plugin.go 264: Auto assigning IP ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" HandleID="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003086f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-kddkc", "timestamp":"2024-07-02 07:01:51.233250093 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.241 [INFO][3745] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.373 [INFO][3745] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
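Note: the recurring kubelet "Nameserver limits exceeded" warnings say that only three nameservers are applied (1.1.1.1 1.0.0.1 8.8.8.8) and the rest are dropped, matching the classic three-server resolv.conf limit. A trivial sketch of that truncation (hypothetical helper for illustration; the fourth address below is made up to trigger the drop and is not from the log):

```go
package main

import "fmt"

// applyNameserverLimit keeps at most `limit` nameservers and drops the rest,
// mirroring the behaviour described in the kubelet warning above.
func applyNameserverLimit(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // last entry hypothetical
	fmt.Println(applyNameserverLimit(servers, 3))                   // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```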
Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.373 [INFO][3745] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.378 [INFO][3745] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.387 [INFO][3745] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.438 [INFO][3745] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.440 [INFO][3745] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.442 [INFO][3745] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.442 [INFO][3745] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.443 [INFO][3745] ipam.go 1685: Creating new handle: k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0 Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.447 [INFO][3745] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.625 [INFO][3745] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.626 [INFO][3745] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" host="localhost" Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.626 [INFO][3745] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 07:01:52.152330 containerd[1289]: 2024-07-02 07:01:51.626 [INFO][3745] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" HandleID="k8s-pod-network.79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:51.655 [INFO][3725] k8s.go 386: Populated endpoint ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffd4bb71-e349-4c7c-bd03-9422990b17d3", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-kddkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285cc9ffff5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:51.655 [INFO][3725] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:51.655 [INFO][3725] dataplane_linux.go 68: Setting the host side veth name to cali285cc9ffff5 ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:51.658 [INFO][3725] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:51.658 [INFO][3725] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffd4bb71-e349-4c7c-bd03-9422990b17d3", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0", Pod:"coredns-7db6d8ff4d-kddkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285cc9ffff5", MAC:"ba:ec:79:50:91:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:52.153041 containerd[1289]: 2024-07-02 07:01:52.150 [INFO][3725] k8s.go 500: Wrote updated endpoint to datastore ContainerID="79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0" Namespace="kube-system" Pod="coredns-7db6d8ff4d-kddkc" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:01:52.159000 audit[3831]: NETFILTER_CFG table=filter:102 family=2 entries=30 op=nft_register_chain pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:52.159000 audit[3831]: SYSCALL arch=c000003e syscall=46 success=yes exit=17032 a0=3 a1=7ffe06278b80 a2=0 a3=7ffe06278b6c items=0 ppid=3442 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.159000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:52.511497 containerd[1289]: time="2024-07-02T07:01:52.511426899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:52.511497 containerd[1289]: time="2024-07-02T07:01:52.511465822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:52.511497 containerd[1289]: time="2024-07-02T07:01:52.511481531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:52.511497 containerd[1289]: time="2024-07-02T07:01:52.511493273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:52.529306 systemd[1]: Started cri-containerd-79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0.scope - libcontainer container 79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0. Jul 2 07:01:52.537000 audit: BPF prog-id=143 op=LOAD Jul 2 07:01:52.537000 audit: BPF prog-id=144 op=LOAD Jul 2 07:01:52.537000 audit[3850]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=3840 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739323038643165313438343065363833373439396534663136646434 Jul 2 07:01:52.537000 audit: BPF prog-id=145 op=LOAD Jul 2 07:01:52.537000 audit[3850]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=3840 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739323038643165313438343065363833373439396534663136646434 Jul 2 07:01:52.537000 audit: BPF prog-id=145 op=UNLOAD Jul 2 07:01:52.537000 audit: BPF prog-id=144 op=UNLOAD Jul 2 07:01:52.537000 audit: BPF prog-id=146 op=LOAD Jul 2 07:01:52.537000 audit[3850]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=3840 pid=3850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739323038643165313438343065363833373439396534663136646434 Jul 2 07:01:52.539185 systemd-resolved[1227]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:01:52.560427 containerd[1289]: time="2024-07-02T07:01:52.560386500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-kddkc,Uid:ffd4bb71-e349-4c7c-bd03-9422990b17d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0\"" Jul 2 07:01:52.560972 kubelet[2313]: E0702 07:01:52.560956 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:52.567734 containerd[1289]: time="2024-07-02T07:01:52.567698316Z" level=info msg="CreateContainer within sandbox \"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:01:52.596313 systemd-networkd[1112]: calib1b775950e0: Gained IPv6LL Jul 2 07:01:52.744024 containerd[1289]: time="2024-07-02T07:01:52.743948124Z" level=info msg="CreateContainer within sandbox \"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"95cca3b6a83bd0b132f4ab66e4350dbc9d5424e9b2a731ec6c3ff5b18652fb25\"" Jul 2 07:01:52.744691 containerd[1289]: time="2024-07-02T07:01:52.744646946Z" level=info msg="StartContainer for \"95cca3b6a83bd0b132f4ab66e4350dbc9d5424e9b2a731ec6c3ff5b18652fb25\"" Jul 2 07:01:52.768238 systemd[1]: Started cri-containerd-95cca3b6a83bd0b132f4ab66e4350dbc9d5424e9b2a731ec6c3ff5b18652fb25.scope - libcontainer container 95cca3b6a83bd0b132f4ab66e4350dbc9d5424e9b2a731ec6c3ff5b18652fb25. Jul 2 07:01:52.777000 audit: BPF prog-id=147 op=LOAD Jul 2 07:01:52.778000 audit: BPF prog-id=148 op=LOAD Jul 2 07:01:52.778000 audit[3883]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=3779 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636361336236613833626430623133326634616236366534333530 Jul 2 07:01:52.778000 audit: BPF prog-id=149 op=LOAD Jul 2 07:01:52.778000 audit[3883]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=3779 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636361336236613833626430623133326634616236366534333530 Jul 2 07:01:52.778000 audit: BPF prog-id=149 op=UNLOAD Jul 2 07:01:52.778000 audit: BPF prog-id=148 op=UNLOAD Jul 2 07:01:52.778000 audit: BPF prog-id=150 op=LOAD Jul 2 07:01:52.778000 audit[3883]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=3779 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:52.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935636361336236613833626430623133326634616236366534333530 Jul 2 07:01:52.917550 containerd[1289]: time="2024-07-02T07:01:52.917502523Z" level=info msg="StartContainer for \"95cca3b6a83bd0b132f4ab66e4350dbc9d5424e9b2a731ec6c3ff5b18652fb25\" returns successfully" Jul 2 07:01:53.006774 
kubelet[2313]: E0702 07:01:53.006747 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:53.175000 audit[3915]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:53.175000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff810271f0 a2=0 a3=7fff810271dc items=0 ppid=2475 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:53.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:53.176000 audit[3915]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=3915 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:53.176000 audit[3915]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff810271f0 a2=0 a3=0 items=0 ppid=2475 pid=3915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:53.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:53.236471 systemd-networkd[1112]: cali285cc9ffff5: Gained IPv6LL Jul 2 07:01:53.756692 containerd[1289]: time="2024-07-02T07:01:53.756577799Z" level=info msg="CreateContainer within sandbox \"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"deed1275ab6637fe6477ae1445970bb6357f6e098ec1837678e79bc71b1af6a4\"" Jul 2 07:01:53.757260 containerd[1289]: time="2024-07-02T07:01:53.757200498Z" level=info msg="StartContainer for \"deed1275ab6637fe6477ae1445970bb6357f6e098ec1837678e79bc71b1af6a4\"" Jul 2 07:01:53.782493 systemd[1]: Started cri-containerd-deed1275ab6637fe6477ae1445970bb6357f6e098ec1837678e79bc71b1af6a4.scope - libcontainer container deed1275ab6637fe6477ae1445970bb6357f6e098ec1837678e79bc71b1af6a4. 
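Note: the WorkloadEndpoint dumps earlier in this stretch print port numbers in hex (Port:0x35, Port:0x23c1); in decimal those are the coredns DNS port and metrics port. A one-line check:

```go
package main

import "fmt"

func main() {
	// 0x35 is the DNS port, 0x23c1 the coredns metrics port from the endpoint dumps above.
	fmt.Println(0x35, 0x23c1) // 53 9153
}
```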
Jul 2 07:01:53.791000 audit: BPF prog-id=151 op=LOAD Jul 2 07:01:53.791000 audit: BPF prog-id=152 op=LOAD Jul 2 07:01:53.791000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=3840 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:53.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465656431323735616236363337666536343737616531343435393730 Jul 2 07:01:53.791000 audit: BPF prog-id=153 op=LOAD Jul 2 07:01:53.791000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=3840 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:53.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465656431323735616236363337666536343737616531343435393730 Jul 2 07:01:53.791000 audit: BPF prog-id=153 op=UNLOAD Jul 2 07:01:53.791000 audit: BPF prog-id=152 op=UNLOAD Jul 2 07:01:53.791000 audit: BPF prog-id=154 op=LOAD Jul 2 07:01:53.791000 audit[3930]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=3840 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:53.791000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465656431323735616236363337666536343737616531343435393730 Jul 2 07:01:53.863944 containerd[1289]: time="2024-07-02T07:01:53.863896696Z" level=info msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" Jul 2 07:01:53.864412 containerd[1289]: time="2024-07-02T07:01:53.864345469Z" level=info msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" Jul 2 07:01:53.934573 kubelet[2313]: I0702 07:01:53.934500 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-d6xh4" podStartSLOduration=32.934480332 podStartE2EDuration="32.934480332s" podCreationTimestamp="2024-07-02 07:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:53.073565482 +0000 UTC m=+46.288553440" watchObservedRunningTime="2024-07-02 07:01:53.934480332 +0000 UTC m=+47.149468270" Jul 2 07:01:53.994969 containerd[1289]: time="2024-07-02T07:01:53.994923212Z" level=info msg="StartContainer for \"deed1275ab6637fe6477ae1445970bb6357f6e098ec1837678e79bc71b1af6a4\" returns successfully" Jul 2 07:01:54.012354 kubelet[2313]: E0702 07:01:54.010995 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:54.012354 kubelet[2313]: E0702 07:01:54.011400 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] k8s.go 608: Cleaning up netns ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" iface="eth0" netns="/var/run/netns/cni-76a33748-dc2e-e885-0d63-45b227c0e35e" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" iface="eth0" netns="/var/run/netns/cni-76a33748-dc2e-e885-0d63-45b227c0e35e" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" iface="eth0" netns="/var/run/netns/cni-76a33748-dc2e-e885-0d63-45b227c0e35e" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] k8s.go 615: Releasing IP address(es) ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.963 [INFO][3994] utils.go 188: Calico CNI releasing IP address ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.980 [INFO][4010] ipam_plugin.go 411: Releasing address using handleID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.980 [INFO][4010] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:53.980 [INFO][4010] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:54.186 [WARNING][4010] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:54.187 [INFO][4010] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:54.228 [INFO][4010] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:01:54.231057 containerd[1289]: 2024-07-02 07:01:54.230 [INFO][3994] k8s.go 621: Teardown processing complete. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:01:54.233272 systemd[1]: run-netns-cni\x2d76a33748\x2ddc2e\x2de885\x2d0d63\x2d45b227c0e35e.mount: Deactivated successfully. 
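Note: the mount unit names like run-netns-cni\x2d76a33748\x2ddc2e\x2de885\x2d0d63\x2d45b227c0e35e.mount above use systemd's unit-name escaping, where '/' in the mount path becomes '-' and a literal '-' becomes \x2d; unescaped, the name lines up with the /var/run/netns/cni-76a33748-dc2e-e885-0d63-45b227c0e35e path logged by Calico. A minimal sketch that undoes just the \x2d escaping (real tooling would use `systemd-escape --unescape`):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// hexEsc matches systemd's \xHH escape sequences in unit names.
var hexEsc = regexp.MustCompile(`\\x([0-9a-fA-F]{2})`)

// unescapeUnit replaces each \xHH sequence with the byte it encodes.
func unescapeUnit(s string) string {
	return hexEsc.ReplaceAllStringFunc(s, func(m string) string {
		n, _ := strconv.ParseUint(m[2:], 16, 8)
		return string(rune(n))
	})
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d76a33748\x2ddc2e\x2de885\x2d0d63\x2d45b227c0e35e.mount`))
	// run-netns-cni-76a33748-dc2e-e885-0d63-45b227c0e35e.mount
}
```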
Jul 2 07:01:54.234447 containerd[1289]: time="2024-07-02T07:01:54.234380688Z" level=info msg="TearDown network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" successfully" Jul 2 07:01:54.234447 containerd[1289]: time="2024-07-02T07:01:54.234439759Z" level=info msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" returns successfully" Jul 2 07:01:54.235143 containerd[1289]: time="2024-07-02T07:01:54.235102813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnptn,Uid:bb25da8c-03d5-4d1f-8f90-51fb2f280ed3,Namespace:calico-system,Attempt:1,}" Jul 2 07:01:54.258965 kubelet[2313]: I0702 07:01:54.258544 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-kddkc" podStartSLOduration=33.258520235 podStartE2EDuration="33.258520235s" podCreationTimestamp="2024-07-02 07:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:01:54.25832038 +0000 UTC m=+47.473308328" watchObservedRunningTime="2024-07-02 07:01:54.258520235 +0000 UTC m=+47.473508163" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] k8s.go 608: Cleaning up netns ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" iface="eth0" netns="/var/run/netns/cni-794ba48a-cab5-2b04-1d30-7cae6d9b379f" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" iface="eth0" netns="/var/run/netns/cni-794ba48a-cab5-2b04-1d30-7cae6d9b379f" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" iface="eth0" netns="/var/run/netns/cni-794ba48a-cab5-2b04-1d30-7cae6d9b379f" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] k8s.go 615: Releasing IP address(es) ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.188 [INFO][3995] utils.go 188: Calico CNI releasing IP address ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.204 [INFO][4019] ipam_plugin.go 411: Releasing address using handleID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.204 [INFO][4019] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.229 [INFO][4019] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.460 [WARNING][4019] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.460 [INFO][4019] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.462 [INFO][4019] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:01:54.466006 containerd[1289]: 2024-07-02 07:01:54.463 [INFO][3995] k8s.go 621: Teardown processing complete. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:01:54.466790 containerd[1289]: time="2024-07-02T07:01:54.466751753Z" level=info msg="TearDown network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" successfully" Jul 2 07:01:54.466876 containerd[1289]: time="2024-07-02T07:01:54.466857962Z" level=info msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" returns successfully" Jul 2 07:01:54.467619 containerd[1289]: time="2024-07-02T07:01:54.467585297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b9797b-qgj7r,Uid:f70926b2-a2c8-485f-8201-0e6ca8908647,Namespace:calico-system,Attempt:1,}" Jul 2 07:01:54.469569 systemd[1]: run-netns-cni\x2d794ba48a\x2dcab5\x2d2b04\x2d1d30\x2d7cae6d9b379f.mount: Deactivated successfully. Jul 2 07:01:54.520000 audit[4029]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:54.520000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffcce0940e0 a2=0 a3=7ffcce0940cc items=0 ppid=2475 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:54.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:54.521000 audit[4029]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=4029 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:54.521000 audit[4029]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcce0940e0 a2=0 a3=0 items=0 ppid=2475 pid=4029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:54.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:54.534000 audit[4031]: NETFILTER_CFG table=filter:107 family=2 entries=11 op=nft_register_rule pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:54.534000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe1bb4ac10 a2=0 a3=7ffe1bb4abfc items=0 ppid=2475 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:54.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:54.536000 audit[4031]: NETFILTER_CFG table=nat:108 family=2 entries=35 op=nft_register_chain pid=4031 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:54.536000 audit[4031]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe1bb4ac10 a2=0 a3=7ffe1bb4abfc items=0 ppid=2475 pid=4031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:54.536000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:54.972271 systemd-networkd[1112]: califce2ec4ddd9: Link UP Jul 2 07:01:54.973236 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:01:54.973305 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califce2ec4ddd9: link becomes ready Jul 2 07:01:54.973559 systemd-networkd[1112]: califce2ec4ddd9: Gained carrier Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.906 [INFO][4039] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rnptn-eth0 csi-node-driver- calico-system bb25da8c-03d5-4d1f-8f90-51fb2f280ed3 811 0 2024-07-02 07:01:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-rnptn eth0 default [] [] [kns.calico-system ksa.calico-system.default] califce2ec4ddd9 [] []}} ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.906 [INFO][4039] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.931 [INFO][4059] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" HandleID="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.941 [INFO][4059] ipam_plugin.go 264: Auto assigning IP ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" HandleID="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fcc80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rnptn", "timestamp":"2024-07-02 07:01:54.931909011 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.941 [INFO][4059] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.942 [INFO][4059] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.942 [INFO][4059] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.944 [INFO][4059] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.947 [INFO][4059] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.951 [INFO][4059] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.952 [INFO][4059] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.954 [INFO][4059] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.954 [INFO][4059] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.955 [INFO][4059] ipam.go 1685: Creating new handle: k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809 Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.958 [INFO][4059] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.963 [INFO][4059] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.963 [INFO][4059] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" host="localhost" Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.963 [INFO][4059] ipam_plugin.go 373: Released host-wide IPAM lock. 
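The Calico IPAM entries above show the address 192.168.88.131/26 being claimed from the host-affine block 192.168.88.128/26. A quick way to confirm that the assigned address falls inside that block, using only the Python standard library (illustrative, not part of the CNI plugin):

    import ipaddress

    # The /26 block spans 192.168.88.128-192.168.88.191, so the address
    # claimed in the records above lies inside it.
    block = ipaddress.ip_network("192.168.88.128/26")
    addr = ipaddress.ip_address("192.168.88.131")
    print(addr in block)  # True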
Jul 2 07:01:54.992108 containerd[1289]: 2024-07-02 07:01:54.963 [INFO][4059] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" HandleID="k8s-pod-network.12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.968 [INFO][4039] k8s.go 386: Populated endpoint ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnptn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rnptn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califce2ec4ddd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.968 [INFO][4039] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.968 [INFO][4039] dataplane_linux.go 68: Setting the host side veth name to califce2ec4ddd9 ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.974 [INFO][4039] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.974 [INFO][4039] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnptn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809", Pod:"csi-node-driver-rnptn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califce2ec4ddd9", MAC:"16:55:86:5d:88:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:54.992795 containerd[1289]: 2024-07-02 07:01:54.986 [INFO][4039] k8s.go 500: Wrote updated endpoint to datastore ContainerID="12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809" Namespace="calico-system" Pod="csi-node-driver-rnptn" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:01:55.003659 kernel: kauditd_printk_skb: 73 callbacks suppressed Jul 2 07:01:55.003798 kernel: audit: type=1325 audit(1719903715.000:576): table=filter:109 family=2 entries=42 op=nft_register_chain pid=4093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:55.000000 audit[4093]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=4093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:55.010549 kernel: audit: type=1300 audit(1719903715.000:576): arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffe29266320 a2=0 a3=7ffe2926630c items=0 ppid=3442 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.010852 kernel: audit: type=1327 audit(1719903715.000:576): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:55.000000 audit[4093]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7ffe29266320 a2=0 a3=7ffe2926630c items=0 ppid=3442 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.000000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:55.012791 kubelet[2313]: E0702 07:01:55.012762 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:55.014073 kubelet[2313]: E0702 07:01:55.013934 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:55.017608 containerd[1289]: time="2024-07-02T07:01:55.017494515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:55.017727 containerd[1289]: time="2024-07-02T07:01:55.017632383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:55.017727 containerd[1289]: time="2024-07-02T07:01:55.017683609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:55.017775 containerd[1289]: time="2024-07-02T07:01:55.017717503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:55.030281 systemd-networkd[1112]: calicc76fd29239: Link UP Jul 2 07:01:55.031592 systemd-networkd[1112]: calicc76fd29239: Gained carrier Jul 2 07:01:55.032152 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicc76fd29239: link becomes ready Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.925 [INFO][4046] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0 calico-kube-controllers-546b9797b- calico-system f70926b2-a2c8-485f-8201-0e6ca8908647 812 0 2024-07-02 07:01:26 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:546b9797b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-546b9797b-qgj7r eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc76fd29239 [] []}} ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.925 [INFO][4046] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.964 [INFO][4069] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" HandleID="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.976 [INFO][4069] ipam_plugin.go 264: Auto assigning IP ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" HandleID="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00027e350), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-546b9797b-qgj7r", "timestamp":"2024-07-02 07:01:54.964655593 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.976 [INFO][4069] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.976 [INFO][4069] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.977 [INFO][4069] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.979 [INFO][4069] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.992 [INFO][4069] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:54.999 [INFO][4069] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.002 [INFO][4069] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.008 [INFO][4069] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.008 [INFO][4069] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.011 [INFO][4069] ipam.go 1685: Creating new handle: k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001 Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.017 [INFO][4069] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.024 [INFO][4069] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.024 [INFO][4069] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" host="localhost" Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.024 [INFO][4069] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 07:01:55.045885 containerd[1289]: 2024-07-02 07:01:55.024 [INFO][4069] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" HandleID="k8s-pod-network.aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.026 [INFO][4046] k8s.go 386: Populated endpoint ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0", GenerateName:"calico-kube-controllers-546b9797b-", Namespace:"calico-system", SelfLink:"", UID:"f70926b2-a2c8-485f-8201-0e6ca8908647", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b9797b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-546b9797b-qgj7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc76fd29239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.026 [INFO][4046] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.026 [INFO][4046] dataplane_linux.go 68: Setting the host side veth name to calicc76fd29239 ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.031 [INFO][4046] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.032 [INFO][4046] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0", GenerateName:"calico-kube-controllers-546b9797b-", Namespace:"calico-system", SelfLink:"", UID:"f70926b2-a2c8-485f-8201-0e6ca8908647", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b9797b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001", Pod:"calico-kube-controllers-546b9797b-qgj7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc76fd29239", MAC:"26:49:b2:05:b7:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:01:55.046471 containerd[1289]: 2024-07-02 07:01:55.043 [INFO][4046] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001" Namespace="calico-system" Pod="calico-kube-controllers-546b9797b-qgj7r" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:01:55.050000 audit[4128]: NETFILTER_CFG table=filter:110 family=2 entries=42 op=nft_register_chain pid=4128 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:55.050000 audit[4128]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffe8b73b7f0 a2=0 a3=7ffe8b73b7dc items=0 ppid=3442 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.058463 kernel: audit: type=1325 audit(1719903715.050:577): table=filter:110 family=2 entries=42 op=nft_register_chain pid=4128 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:01:55.058526 kernel: audit: type=1300 audit(1719903715.050:577): arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffe8b73b7f0 a2=0 a3=7ffe8b73b7dc items=0 ppid=3442 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.058546 kernel: audit: type=1327 audit(1719903715.050:577): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:55.050000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:01:55.062405 systemd[1]: Started cri-containerd-12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809.scope - libcontainer container 12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809. Jul 2 07:01:55.071000 audit: BPF prog-id=155 op=LOAD Jul 2 07:01:55.071000 audit: BPF prog-id=156 op=LOAD Jul 2 07:01:55.073996 kernel: audit: type=1334 audit(1719903715.071:578): prog-id=155 op=LOAD Jul 2 07:01:55.074067 kernel: audit: type=1334 audit(1719903715.071:579): prog-id=156 op=LOAD Jul 2 07:01:55.080714 kernel: audit: type=1300 audit(1719903715.071:579): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4102 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.080825 kernel: audit: type=1327 audit(1719903715.071:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132616166373062616565633061393435646566326438363939356634 Jul 2 07:01:55.071000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4102 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132616166373062616565633061393435646566326438363939356634 Jul 2 07:01:55.075617 systemd-resolved[1227]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:01:55.081146 containerd[1289]: time="2024-07-02T07:01:55.078698281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:01:55.081146 containerd[1289]: time="2024-07-02T07:01:55.078750329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:55.081146 containerd[1289]: time="2024-07-02T07:01:55.078768673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:01:55.081146 containerd[1289]: time="2024-07-02T07:01:55.078791456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:01:55.071000 audit: BPF prog-id=157 op=LOAD Jul 2 07:01:55.071000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4102 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132616166373062616565633061393435646566326438363939356634 Jul 2 07:01:55.071000 audit: BPF prog-id=157 op=UNLOAD Jul 2 07:01:55.071000 audit: BPF prog-id=156 op=UNLOAD Jul 2 07:01:55.071000 audit: BPF prog-id=158 op=LOAD Jul 2 07:01:55.071000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4102 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.071000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132616166373062616565633061393435646566326438363939356634 Jul 2 07:01:55.090683 containerd[1289]: time="2024-07-02T07:01:55.090612111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnptn,Uid:bb25da8c-03d5-4d1f-8f90-51fb2f280ed3,Namespace:calico-system,Attempt:1,} returns sandbox id \"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809\"" Jul 2 07:01:55.093324 containerd[1289]: time="2024-07-02T07:01:55.092930041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 07:01:55.095295 systemd[1]: Started cri-containerd-aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001.scope - libcontainer container aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001. 
Jul 2 07:01:55.106000 audit: BPF prog-id=159 op=LOAD Jul 2 07:01:55.106000 audit: BPF prog-id=160 op=LOAD Jul 2 07:01:55.106000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663938363336373365646565306464326363306239653238313264 Jul 2 07:01:55.106000 audit: BPF prog-id=161 op=LOAD Jul 2 07:01:55.106000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.106000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663938363336373365646565306464326363306239653238313264 Jul 2 07:01:55.107000 audit: BPF prog-id=161 op=UNLOAD Jul 2 07:01:55.107000 audit: BPF prog-id=160 op=UNLOAD Jul 2 07:01:55.107000 audit: BPF prog-id=162 op=LOAD Jul 2 07:01:55.107000 audit[4159]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4148 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.107000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663938363336373365646565306464326363306239653238313264 Jul 2 07:01:55.108952 systemd-resolved[1227]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:01:55.145038 containerd[1289]: time="2024-07-02T07:01:55.144969488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-546b9797b-qgj7r,Uid:f70926b2-a2c8-485f-8201-0e6ca8908647,Namespace:calico-system,Attempt:1,} returns sandbox id \"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001\"" Jul 2 07:01:55.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.127:22-10.0.0.1:50958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:55.147558 systemd[1]: Started sshd@10-10.0.0.127:22-10.0.0.1:50958.service - OpenSSH per-connection server daemon (10.0.0.1:50958). 
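The audit PROCTITLE fields throughout this log encode the audited process's command line as hex, with NUL bytes separating the arguments (here the runc and iptables-restore invocations). A small sketch (a hypothetical decoder, not part of auditd) that recovers the argv from such a string:

    # Decode an audit PROCTITLE hex string: the kernel records the process
    # title as raw bytes, so individual arguments are separated by NULs.
    def decode_proctitle(hex_string: str) -> list[str]:
        raw = bytes.fromhex(hex_string)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    # Prefix of one of the PROCTITLE values recorded above:
    sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    print(decode_proctitle(sample))  # ['runc', '--root', '/run/containerd/runc/k8s.io']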
Jul 2 07:01:55.178000 audit[4187]: USER_ACCT pid=4187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.180298 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 50958 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:01:55.180000 audit[4187]: CRED_ACQ pid=4187 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.180000 audit[4187]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc476d79c0 a2=3 a3=7f450acc0480 items=0 ppid=1 pid=4187 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.180000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:01:55.181862 sshd[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:01:55.185189 systemd-logind[1274]: New session 11 of user core. Jul 2 07:01:55.191348 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 07:01:55.194000 audit[4187]: USER_START pid=4187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.196000 audit[4189]: CRED_ACQ pid=4189 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.310179 sshd[4187]: pam_unix(sshd:session): session closed for user core Jul 2 07:01:55.309000 audit[4187]: USER_END pid=4187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.310000 audit[4187]: CRED_DISP pid=4187 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:01:55.312323 systemd[1]: sshd@10-10.0.0.127:22-10.0.0.1:50958.service: Deactivated successfully. Jul 2 07:01:55.313081 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:01:55.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.127:22-10.0.0.1:50958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:01:55.313628 systemd-logind[1274]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:01:55.314304 systemd-logind[1274]: Removed session 11. 
Jul 2 07:01:55.617000 audit[4201]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:55.617000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc803f9ff0 a2=0 a3=7ffc803f9fdc items=0 ppid=2475 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:55.622000 audit[4201]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4201 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:01:55.622000 audit[4201]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc803f9ff0 a2=0 a3=7ffc803f9fdc items=0 ppid=2475 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:55.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:01:56.016541 kubelet[2313]: E0702 07:01:56.016510 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:01:56.500345 systemd-networkd[1112]: califce2ec4ddd9: Gained IPv6LL Jul 2 07:01:56.501327 systemd-networkd[1112]: calicc76fd29239: Gained IPv6LL Jul 2 07:01:58.165479 containerd[1289]: time="2024-07-02T07:01:58.165421761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:58.166488 containerd[1289]: time="2024-07-02T07:01:58.166435804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 07:01:58.171349 containerd[1289]: time="2024-07-02T07:01:58.171298699Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:58.173275 containerd[1289]: time="2024-07-02T07:01:58.173225685Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:58.174982 containerd[1289]: time="2024-07-02T07:01:58.174949931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:01:58.175758 containerd[1289]: time="2024-07-02T07:01:58.175717621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 3.082753987s" Jul 2 07:01:58.175817 containerd[1289]: time="2024-07-02T07:01:58.175755172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference 
\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 07:01:58.176927 containerd[1289]: time="2024-07-02T07:01:58.176723719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 07:01:58.177719 containerd[1289]: time="2024-07-02T07:01:58.177686806Z" level=info msg="CreateContainer within sandbox \"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 07:01:58.206606 containerd[1289]: time="2024-07-02T07:01:58.206545771Z" level=info msg="CreateContainer within sandbox \"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"466333bc94b1b560f7d67312f9cd24272156bd0a00860cf6df07e3a10689f24e\"" Jul 2 07:01:58.207160 containerd[1289]: time="2024-07-02T07:01:58.207104269Z" level=info msg="StartContainer for \"466333bc94b1b560f7d67312f9cd24272156bd0a00860cf6df07e3a10689f24e\"" Jul 2 07:01:58.249604 systemd[1]: Started cri-containerd-466333bc94b1b560f7d67312f9cd24272156bd0a00860cf6df07e3a10689f24e.scope - libcontainer container 466333bc94b1b560f7d67312f9cd24272156bd0a00860cf6df07e3a10689f24e. Jul 2 07:01:58.260000 audit: BPF prog-id=163 op=LOAD Jul 2 07:01:58.260000 audit[4217]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4102 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:58.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436363333336263393462316235363066376436373331326639636432 Jul 2 07:01:58.260000 audit: BPF prog-id=164 op=LOAD Jul 2 07:01:58.260000 audit[4217]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4102 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:58.260000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436363333336263393462316235363066376436373331326639636432 Jul 2 07:01:58.261000 audit: BPF prog-id=164 op=UNLOAD Jul 2 07:01:58.261000 audit: BPF prog-id=163 op=UNLOAD Jul 2 07:01:58.261000 audit: BPF prog-id=165 op=LOAD Jul 2 07:01:58.261000 audit[4217]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4102 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:01:58.261000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436363333336263393462316235363066376436373331326639636432 Jul 2 07:01:58.275044 containerd[1289]: time="2024-07-02T07:01:58.274993236Z" level=info msg="StartContainer for 
\"466333bc94b1b560f7d67312f9cd24272156bd0a00860cf6df07e3a10689f24e\" returns successfully" Jul 2 07:02:00.328854 systemd[1]: Started sshd@11-10.0.0.127:22-10.0.0.1:50960.service - OpenSSH per-connection server daemon (10.0.0.1:50960). Jul 2 07:02:00.331504 kernel: kauditd_printk_skb: 48 callbacks suppressed Jul 2 07:02:00.331551 kernel: audit: type=1130 audit(1719903720.327:606): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.127:22-10.0.0.1:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:00.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.127:22-10.0.0.1:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:00.374000 audit[4263]: USER_ACCT pid=4263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.375776 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 50960 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:00.377637 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:00.388633 kernel: audit: type=1101 audit(1719903720.374:607): pid=4263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.388768 kernel: audit: type=1103 audit(1719903720.375:608): pid=4263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.388799 kernel: audit: type=1006 audit(1719903720.375:609): pid=4263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 2 07:02:00.388819 kernel: audit: type=1300 audit(1719903720.375:609): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6c05400 a2=3 a3=7f2041a1e480 items=0 ppid=1 pid=4263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:00.375000 audit[4263]: CRED_ACQ pid=4263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.375000 audit[4263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda6c05400 a2=3 a3=7f2041a1e480 items=0 ppid=1 pid=4263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:00.383937 systemd-logind[1274]: New session 12 of user core. Jul 2 07:02:00.394455 kernel: audit: type=1327 audit(1719903720.375:609): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:00.375000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:00.394436 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 2 07:02:00.402000 audit[4263]: USER_START pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.410749 kernel: audit: type=1105 audit(1719903720.402:610): pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.410869 kernel: audit: type=1103 audit(1719903720.404:611): pid=4265 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.404000 audit[4265]: CRED_ACQ pid=4265 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:00.980223 containerd[1289]: time="2024-07-02T07:02:00.980159478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:00.985415 containerd[1289]: time="2024-07-02T07:02:00.985336412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 07:02:00.987212 containerd[1289]: time="2024-07-02T07:02:00.987181053Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:00.988908 containerd[1289]: time="2024-07-02T07:02:00.988878388Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:00.990763 containerd[1289]: time="2024-07-02T07:02:00.990729201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:00.991307 containerd[1289]: time="2024-07-02T07:02:00.991248234Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 2.814490762s" Jul 2 07:02:00.991375 containerd[1289]: time="2024-07-02T07:02:00.991305892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 07:02:00.993331 containerd[1289]: time="2024-07-02T07:02:00.993284576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 07:02:01.012800 containerd[1289]: time="2024-07-02T07:02:01.012748457Z" level=info msg="CreateContainer within sandbox \"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 07:02:01.040960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146894648.mount: Deactivated successfully. Jul 2 07:02:01.056470 sshd[4263]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:01.056000 audit[4263]: USER_END pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.056000 audit[4263]: CRED_DISP pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.063970 kernel: audit: type=1106 audit(1719903721.056:612): pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.064024 kernel: audit: type=1104 audit(1719903721.056:613): pid=4263 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.067595 systemd[1]: sshd@11-10.0.0.127:22-10.0.0.1:50960.service: Deactivated successfully. Jul 2 07:02:01.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.127:22-10.0.0.1:50960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:01.068134 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:02:01.068595 systemd-logind[1274]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:02:01.076600 systemd[1]: Started sshd@12-10.0.0.127:22-10.0.0.1:50976.service - OpenSSH per-connection server daemon (10.0.0.1:50976). Jul 2 07:02:01.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.127:22-10.0.0.1:50976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:01.077370 systemd-logind[1274]: Removed session 12. 
Jul 2 07:02:01.101000 audit[4280]: USER_ACCT pid=4280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.102449 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 50976 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:01.101000 audit[4280]: CRED_ACQ pid=4280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.101000 audit[4280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff845fcf30 a2=3 a3=7f5859d4b480 items=0 ppid=1 pid=4280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:01.101000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:01.103371 sshd[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:01.106510 systemd-logind[1274]: New session 13 of user core. Jul 2 07:02:01.112260 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 07:02:01.115000 audit[4280]: USER_START pid=4280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.116000 audit[4282]: CRED_ACQ pid=4282 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.320707 containerd[1289]: time="2024-07-02T07:02:01.320656996Z" level=info msg="CreateContainer within sandbox \"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a\"" Jul 2 07:02:01.321386 containerd[1289]: time="2024-07-02T07:02:01.321290365Z" level=info msg="StartContainer for \"96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a\"" Jul 2 07:02:01.348348 systemd[1]: Started cri-containerd-96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a.scope - libcontainer container 96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a. 
Jul 2 07:02:01.356000 audit: BPF prog-id=166 op=LOAD Jul 2 07:02:01.357000 audit: BPF prog-id=167 op=LOAD Jul 2 07:02:01.357000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4148 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:01.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623934346566376632383663363266363162316265323437653235 Jul 2 07:02:01.357000 audit: BPF prog-id=168 op=LOAD Jul 2 07:02:01.357000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4148 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:01.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623934346566376632383663363266363162316265323437653235 Jul 2 07:02:01.357000 audit: BPF prog-id=168 op=UNLOAD Jul 2 07:02:01.357000 audit: BPF prog-id=167 op=UNLOAD Jul 2 07:02:01.357000 audit: BPF prog-id=169 op=LOAD Jul 2 07:02:01.357000 audit[4297]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4148 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:01.357000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3936623934346566376632383663363266363162316265323437653235 Jul 2 07:02:01.438000 audit[4280]: USER_END pid=4280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.438000 audit[4280]: CRED_DISP pid=4280 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.438455 sshd[4280]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:01.448806 systemd[1]: sshd@12-10.0.0.127:22-10.0.0.1:50976.service: Deactivated successfully. Jul 2 07:02:01.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.127:22-10.0.0.1:50976 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:01.449441 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:02:01.449984 systemd-logind[1274]: Session 13 logged out. Waiting for processes to exit. 
Jul 2 07:02:01.451707 systemd[1]: Started sshd@13-10.0.0.127:22-10.0.0.1:50992.service - OpenSSH per-connection server daemon (10.0.0.1:50992). Jul 2 07:02:01.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.127:22-10.0.0.1:50992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:01.452459 systemd-logind[1274]: Removed session 13. Jul 2 07:02:01.887592 containerd[1289]: time="2024-07-02T07:02:01.887532455Z" level=info msg="StartContainer for \"96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a\" returns successfully" Jul 2 07:02:01.949000 audit[4328]: USER_ACCT pid=4328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.950259 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 50992 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:01.950000 audit[4328]: CRED_ACQ pid=4328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.950000 audit[4328]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd1e3bf0a0 a2=3 a3=7f24c3e8b480 items=0 ppid=1 pid=4328 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:01.950000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:01.951815 sshd[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:01.955385 systemd-logind[1274]: New session 14 of user core. Jul 2 07:02:01.965287 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 07:02:01.967000 audit[4328]: USER_START pid=4328 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:01.969000 audit[4330]: CRED_ACQ pid=4330 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:02.242327 sshd[4328]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:02.242000 audit[4328]: USER_END pid=4328 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:02.243000 audit[4328]: CRED_DISP pid=4328 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:02.246208 systemd[1]: sshd@13-10.0.0.127:22-10.0.0.1:50992.service: Deactivated successfully. 
Jul 2 07:02:02.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.127:22-10.0.0.1:50992 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:02.247265 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:02:02.247918 systemd-logind[1274]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:02:02.248974 systemd-logind[1274]: Removed session 14. Jul 2 07:02:02.256834 kubelet[2313]: I0702 07:02:02.256402 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-546b9797b-qgj7r" podStartSLOduration=30.410632406 podStartE2EDuration="36.256382229s" podCreationTimestamp="2024-07-02 07:01:26 +0000 UTC" firstStartedPulling="2024-07-02 07:01:55.146604748 +0000 UTC m=+48.361592686" lastFinishedPulling="2024-07-02 07:02:00.992354571 +0000 UTC m=+54.207342509" observedRunningTime="2024-07-02 07:02:02.081841704 +0000 UTC m=+55.296829642" watchObservedRunningTime="2024-07-02 07:02:02.256382229 +0000 UTC m=+55.471370157" Jul 2 07:02:02.467139 kubelet[2313]: E0702 07:02:02.467093 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:02.482253 systemd[1]: run-containerd-runc-k8s.io-4d7140d2a7eabf32be8d4233ecf772468801e97b42f14f4b8913e55aa23c449c-runc.v20Xal.mount: Deactivated successfully. Jul 2 07:02:03.034814 kubelet[2313]: E0702 07:02:03.034784 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:03.184000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:03.184000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000e36fe0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:03.184000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:03.185000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:03.185000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c00150c690 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:03.185000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:04.004000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.004000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c0073088c0 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.004000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.004000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:04.004000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00ee9a240 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.004000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:04.004000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7757 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.004000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=70 a1=c00e3c9f20 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.004000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:04.021000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7763 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.021000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c00e3c9f50 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.021000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:04.022000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.022000 audit[2201]: AVC avc: denied { watch } for pid=2201 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c515,c977 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:04.022000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c0101fdf50 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.022000 audit[2201]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6c a1=c0073088e0 a2=fc6 a3=0 items=0 ppid=2022 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c515,c977 key=(null) Jul 2 07:02:04.022000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:04.022000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E313237002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75626572 Jul 2 07:02:05.822247 containerd[1289]: time="2024-07-02T07:02:05.821704862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:05.823349 containerd[1289]: time="2024-07-02T07:02:05.823301859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 07:02:05.825086 containerd[1289]: time="2024-07-02T07:02:05.825058444Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:05.828372 containerd[1289]: time="2024-07-02T07:02:05.828330659Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:05.833787 containerd[1289]: time="2024-07-02T07:02:05.833699908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:05.835031 containerd[1289]: time="2024-07-02T07:02:05.834988970Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 4.841650823s" Jul 2 07:02:05.835083 containerd[1289]: time="2024-07-02T07:02:05.835033235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 07:02:05.837038 containerd[1289]: time="2024-07-02T07:02:05.837011699Z" level=info msg="CreateContainer within sandbox \"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 07:02:05.900234 containerd[1289]: time="2024-07-02T07:02:05.900181505Z" level=info msg="CreateContainer within sandbox \"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5\"" Jul 2 07:02:05.900757 containerd[1289]: time="2024-07-02T07:02:05.900727470Z" level=info msg="StartContainer for \"8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5\"" Jul 2 07:02:05.934577 systemd[1]: Started cri-containerd-8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5.scope - libcontainer container 8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5. 
Jul 2 07:02:05.949961 kernel: kauditd_printk_skb: 59 callbacks suppressed Jul 2 07:02:05.950083 kernel: audit: type=1334 audit(1719903725.947:647): prog-id=170 op=LOAD Jul 2 07:02:05.947000 audit: BPF prog-id=170 op=LOAD Jul 2 07:02:05.947000 audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866306130343035636539303664303435336334646165613661616334 Jul 2 07:02:05.959080 kernel: audit: type=1300 audit(1719903725.947:647): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.959185 kernel: audit: type=1327 audit(1719903725.947:647): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866306130343035636539303664303435336334646165613661616334 Jul 2 07:02:05.947000 audit: BPF prog-id=171 op=LOAD Jul 2 07:02:05.961140 kernel: audit: type=1334 audit(1719903725.947:648): prog-id=171 op=LOAD Jul 2 07:02:05.947000 audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866306130343035636539303664303435336334646165613661616334 Jul 2 07:02:05.969598 kernel: audit: type=1300 audit(1719903725.947:648): arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.969706 kernel: audit: type=1327 audit(1719903725.947:648): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866306130343035636539303664303435336334646165613661616334 Jul 2 07:02:05.947000 audit: BPF prog-id=171 op=UNLOAD Jul 2 07:02:05.971144 kernel: audit: type=1334 audit(1719903725.947:649): prog-id=171 op=UNLOAD Jul 2 07:02:05.971201 kernel: audit: type=1334 audit(1719903725.947:650): prog-id=170 op=UNLOAD Jul 2 07:02:05.947000 audit: BPF prog-id=170 op=UNLOAD Jul 2 07:02:05.947000 audit: BPF prog-id=172 op=LOAD Jul 2 07:02:05.974300 kernel: audit: type=1334 audit(1719903725.947:651): prog-id=172 op=LOAD Jul 2 07:02:05.974347 kernel: audit: type=1300 audit(1719903725.947:651): arch=c000003e 
syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.947000 audit[4419]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=4102 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:05.947000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3866306130343035636539303664303435336334646165613661616334 Jul 2 07:02:06.054683 containerd[1289]: time="2024-07-02T07:02:06.054627681Z" level=info msg="StartContainer for \"8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5\" returns successfully" Jul 2 07:02:06.511000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:06.511000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:06.511000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d77060 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:06.511000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000b82d20 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:06.511000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:06.511000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:06.511000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:06.511000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c001e6ada0 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:06.511000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:06.512000 audit[2175]: AVC avc: denied { watch } for pid=2175 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c25,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 07:02:06.512000 audit[2175]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000b82d60 a2=fc6 a3=0 items=0 ppid=2023 pid=2175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c25,c543 key=(null) Jul 2 07:02:06.512000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 07:02:06.856102 containerd[1289]: time="2024-07-02T07:02:06.855992307Z" level=info msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" Jul 2 07:02:06.893146 systemd[1]: run-containerd-runc-k8s.io-8f0a0405ce906d0453c4daea6aac4c5acaacdb442232324117796466eee0abf5-runc.0vWlHy.mount: Deactivated successfully. Jul 2 07:02:06.921918 kubelet[2313]: I0702 07:02:06.921879 2313 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 07:02:06.921918 kubelet[2313]: I0702 07:02:06.921910 2313 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.937 [WARNING][4467] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0", GenerateName:"calico-kube-controllers-546b9797b-", Namespace:"calico-system", SelfLink:"", UID:"f70926b2-a2c8-485f-8201-0e6ca8908647", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b9797b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001", Pod:"calico-kube-controllers-546b9797b-qgj7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc76fd29239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.938 [INFO][4467] k8s.go 608: Cleaning up netns ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.938 [INFO][4467] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" iface="eth0" netns="" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.938 [INFO][4467] k8s.go 615: Releasing IP address(es) ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.938 [INFO][4467] utils.go 188: Calico CNI releasing IP address ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.955 [INFO][4476] ipam_plugin.go 411: Releasing address using handleID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.955 [INFO][4476] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:06.955 [INFO][4476] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:07.019 [WARNING][4476] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:07.020 [INFO][4476] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:07.053 [INFO][4476] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:07.057337 containerd[1289]: 2024-07-02 07:02:07.055 [INFO][4467] k8s.go 621: Teardown processing complete. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.058141 containerd[1289]: time="2024-07-02T07:02:07.058002169Z" level=info msg="TearDown network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" successfully" Jul 2 07:02:07.058141 containerd[1289]: time="2024-07-02T07:02:07.058044280Z" level=info msg="StopPodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" returns successfully" Jul 2 07:02:07.058657 containerd[1289]: time="2024-07-02T07:02:07.058608650Z" level=info msg="RemovePodSandbox for \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" Jul 2 07:02:07.070271 containerd[1289]: time="2024-07-02T07:02:07.061462848Z" level=info msg="Forcibly stopping sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\"" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.100 [WARNING][4498] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0", GenerateName:"calico-kube-controllers-546b9797b-", Namespace:"calico-system", SelfLink:"", UID:"f70926b2-a2c8-485f-8201-0e6ca8908647", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"546b9797b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aef9863673edee0dd2cc0b9e2812ddd3052371572840ba7ffc6f3e7bb694e001", Pod:"calico-kube-controllers-546b9797b-qgj7r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc76fd29239", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.101 [INFO][4498] k8s.go 608: Cleaning up netns ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.101 [INFO][4498] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" iface="eth0" netns="" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.101 [INFO][4498] k8s.go 615: Releasing IP address(es) ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.101 [INFO][4498] utils.go 188: Calico CNI releasing IP address ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.118 [INFO][4506] ipam_plugin.go 411: Releasing address using handleID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.118 [INFO][4506] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.118 [INFO][4506] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.124 [WARNING][4506] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.124 [INFO][4506] ipam_plugin.go 439: Releasing address using workloadID ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" HandleID="k8s-pod-network.53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Workload="localhost-k8s-calico--kube--controllers--546b9797b--qgj7r-eth0" Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.125 [INFO][4506] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:07.128271 containerd[1289]: 2024-07-02 07:02:07.127 [INFO][4498] k8s.go 621: Teardown processing complete. ContainerID="53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551" Jul 2 07:02:07.128271 containerd[1289]: time="2024-07-02T07:02:07.128242423Z" level=info msg="TearDown network for sandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" successfully" Jul 2 07:02:07.254031 systemd[1]: Started sshd@14-10.0.0.127:22-10.0.0.1:51046.service - OpenSSH per-connection server daemon (10.0.0.1:51046). Jul 2 07:02:07.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.127:22-10.0.0.1:51046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:07.778000 audit[4516]: USER_ACCT pid=4516 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:07.779382 sshd[4516]: Accepted publickey for core from 10.0.0.1 port 51046 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:07.778000 audit[4516]: CRED_ACQ pid=4516 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:07.779000 audit[4516]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd91baedb0 a2=3 a3=7fcf8b0c7480 items=0 ppid=1 pid=4516 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:07.779000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:07.780608 sshd[4516]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:07.784544 systemd-logind[1274]: New session 15 of user core. Jul 2 07:02:07.794302 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 2 07:02:07.798000 audit[4516]: USER_START pid=4516 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:07.800000 audit[4523]: CRED_ACQ pid=4523 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:07.817042 containerd[1289]: time="2024-07-02T07:02:07.816997140Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 07:02:07.817293 containerd[1289]: time="2024-07-02T07:02:07.817071063Z" level=info msg="RemovePodSandbox \"53b84b4e265802af2800ee114a99d00e2f1b426ddbe146321ff543648e7ed551\" returns successfully" Jul 2 07:02:07.817601 containerd[1289]: time="2024-07-02T07:02:07.817577009Z" level=info msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.868 [WARNING][4539] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0815c98c-0645-455a-b2ea-3705ee7d083c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee", Pod:"coredns-7db6d8ff4d-d6xh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b775950e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.869 [INFO][4539] k8s.go 608: Cleaning up netns ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 
07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.869 [INFO][4539] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" iface="eth0" netns="" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.869 [INFO][4539] k8s.go 615: Releasing IP address(es) ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.869 [INFO][4539] utils.go 188: Calico CNI releasing IP address ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.891 [INFO][4554] ipam_plugin.go 411: Releasing address using handleID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.891 [INFO][4554] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.891 [INFO][4554] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.932 [WARNING][4554] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:07.932 [INFO][4554] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:08.085 [INFO][4554] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.089278 containerd[1289]: 2024-07-02 07:02:08.087 [INFO][4539] k8s.go 621: Teardown processing complete. 
ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.089278 containerd[1289]: time="2024-07-02T07:02:08.088697122Z" level=info msg="TearDown network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" successfully" Jul 2 07:02:08.089278 containerd[1289]: time="2024-07-02T07:02:08.088738973Z" level=info msg="StopPodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" returns successfully" Jul 2 07:02:08.090260 containerd[1289]: time="2024-07-02T07:02:08.089332988Z" level=info msg="RemovePodSandbox for \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" Jul 2 07:02:08.090260 containerd[1289]: time="2024-07-02T07:02:08.089367485Z" level=info msg="Forcibly stopping sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\"" Jul 2 07:02:08.091384 sshd[4516]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:08.091000 audit[4516]: USER_END pid=4516 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:08.091000 audit[4516]: CRED_DISP pid=4516 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:08.094093 systemd[1]: sshd@14-10.0.0.127:22-10.0.0.1:51046.service: Deactivated successfully. Jul 2 07:02:08.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.127:22-10.0.0.1:51046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:08.095018 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:02:08.095960 systemd-logind[1274]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:02:08.096809 systemd-logind[1274]: Removed session 15. Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.124 [WARNING][4579] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0815c98c-0645-455a-b2ea-3705ee7d083c", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7471d866f8f3ece75d4384c3c94286f4815e182658976105e190ab8e47e5daee", Pod:"coredns-7db6d8ff4d-d6xh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1b775950e0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.124 [INFO][4579] k8s.go 608: Cleaning up netns ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.124 [INFO][4579] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" iface="eth0" netns="" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.124 [INFO][4579] k8s.go 615: Releasing IP address(es) ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.124 [INFO][4579] utils.go 188: Calico CNI releasing IP address ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.143 [INFO][4586] ipam_plugin.go 411: Releasing address using handleID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.144 [INFO][4586] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.144 [INFO][4586] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.156 [WARNING][4586] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.156 [INFO][4586] ipam_plugin.go 439: Releasing address using workloadID ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" HandleID="k8s-pod-network.08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Workload="localhost-k8s-coredns--7db6d8ff4d--d6xh4-eth0" Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.158 [INFO][4586] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.161009 containerd[1289]: 2024-07-02 07:02:08.159 [INFO][4579] k8s.go 621: Teardown processing complete. ContainerID="08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67" Jul 2 07:02:08.161496 containerd[1289]: time="2024-07-02T07:02:08.161051749Z" level=info msg="TearDown network for sandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" successfully" Jul 2 07:02:08.263508 containerd[1289]: time="2024-07-02T07:02:08.263443388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 07:02:08.263705 containerd[1289]: time="2024-07-02T07:02:08.263527530Z" level=info msg="RemovePodSandbox \"08431734944c0a3bf0b3204dff7e9275a80307b5079c8e25523e22f7fc14be67\" returns successfully" Jul 2 07:02:08.264071 containerd[1289]: time="2024-07-02T07:02:08.264038325Z" level=info msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.299 [WARNING][4610] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffd4bb71-e349-4c7c-bd03-9422990b17d3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0", Pod:"coredns-7db6d8ff4d-kddkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285cc9ffff5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.299 [INFO][4610] k8s.go 608: Cleaning up netns ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.299 [INFO][4610] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" iface="eth0" netns="" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.299 [INFO][4610] k8s.go 615: Releasing IP address(es) ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.299 [INFO][4610] utils.go 188: Calico CNI releasing IP address ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.315 [INFO][4617] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.315 [INFO][4617] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.315 [INFO][4617] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.322 [WARNING][4617] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.322 [INFO][4617] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.323 [INFO][4617] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.326461 containerd[1289]: 2024-07-02 07:02:08.325 [INFO][4610] k8s.go 621: Teardown processing complete. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.327042 containerd[1289]: time="2024-07-02T07:02:08.326486418Z" level=info msg="TearDown network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" successfully" Jul 2 07:02:08.327042 containerd[1289]: time="2024-07-02T07:02:08.326523940Z" level=info msg="StopPodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" returns successfully" Jul 2 07:02:08.327141 containerd[1289]: time="2024-07-02T07:02:08.327104190Z" level=info msg="RemovePodSandbox for \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" Jul 2 07:02:08.327233 containerd[1289]: time="2024-07-02T07:02:08.327180066Z" level=info msg="Forcibly stopping sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\"" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.357 [WARNING][4649] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ffd4bb71-e349-4c7c-bd03-9422990b17d3", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79208d1e14840e6837499e4f16dd40003cde38ad208d10c24c99dc2f8d5054a0", Pod:"coredns-7db6d8ff4d-kddkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali285cc9ffff5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.357 [INFO][4649] k8s.go 608: Cleaning up netns ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.357 [INFO][4649] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" iface="eth0" netns="" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.357 [INFO][4649] k8s.go 615: Releasing IP address(es) ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.357 [INFO][4649] utils.go 188: Calico CNI releasing IP address ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.375 [INFO][4657] ipam_plugin.go 411: Releasing address using handleID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.375 [INFO][4657] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.375 [INFO][4657] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.379 [WARNING][4657] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.379 [INFO][4657] ipam_plugin.go 439: Releasing address using workloadID ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" HandleID="k8s-pod-network.c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Workload="localhost-k8s-coredns--7db6d8ff4d--kddkc-eth0" Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.380 [INFO][4657] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.383697 containerd[1289]: 2024-07-02 07:02:08.381 [INFO][4649] k8s.go 621: Teardown processing complete. ContainerID="c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793" Jul 2 07:02:08.383697 containerd[1289]: time="2024-07-02T07:02:08.383159475Z" level=info msg="TearDown network for sandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" successfully" Jul 2 07:02:08.521848 containerd[1289]: time="2024-07-02T07:02:08.521765098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 07:02:08.522041 containerd[1289]: time="2024-07-02T07:02:08.521862496Z" level=info msg="RemovePodSandbox \"c3fd70e22ed39f33f13f73aa8a2eded243ce2e959df51c15b08c8f1ef1ac4793\" returns successfully" Jul 2 07:02:08.522389 containerd[1289]: time="2024-07-02T07:02:08.522365406Z" level=info msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.550 [WARNING][4679] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnptn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809", Pod:"csi-node-driver-rnptn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califce2ec4ddd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.550 [INFO][4679] k8s.go 608: Cleaning up netns ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.550 [INFO][4679] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" iface="eth0" netns="" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.550 [INFO][4679] k8s.go 615: Releasing IP address(es) ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.550 [INFO][4679] utils.go 188: Calico CNI releasing IP address ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.568 [INFO][4687] ipam_plugin.go 411: Releasing address using handleID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.568 [INFO][4687] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.568 [INFO][4687] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.573 [WARNING][4687] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.573 [INFO][4687] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.574 [INFO][4687] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.576073 containerd[1289]: 2024-07-02 07:02:08.575 [INFO][4679] k8s.go 621: Teardown processing complete. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.576508 containerd[1289]: time="2024-07-02T07:02:08.576113081Z" level=info msg="TearDown network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" successfully" Jul 2 07:02:08.576508 containerd[1289]: time="2024-07-02T07:02:08.576164630Z" level=info msg="StopPodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" returns successfully" Jul 2 07:02:08.576625 containerd[1289]: time="2024-07-02T07:02:08.576598897Z" level=info msg="RemovePodSandbox for \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" Jul 2 07:02:08.576668 containerd[1289]: time="2024-07-02T07:02:08.576630489Z" level=info msg="Forcibly stopping sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\"" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.606 [WARNING][4709] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnptn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb25da8c-03d5-4d1f-8f90-51fb2f280ed3", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 1, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12aaf70baeec0a945def2d86995f481badb3e008d1af5b50046d4b4b7d2eb809", Pod:"csi-node-driver-rnptn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"califce2ec4ddd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.606 [INFO][4709] k8s.go 608: Cleaning up netns ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.606 [INFO][4709] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" iface="eth0" netns="" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.606 [INFO][4709] k8s.go 615: Releasing IP address(es) ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.606 [INFO][4709] utils.go 188: Calico CNI releasing IP address ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.631 [INFO][4717] ipam_plugin.go 411: Releasing address using handleID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.631 [INFO][4717] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.631 [INFO][4717] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.665 [WARNING][4717] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.665 [INFO][4717] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" HandleID="k8s-pod-network.694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Workload="localhost-k8s-csi--node--driver--rnptn-eth0" Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.719 [INFO][4717] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:08.722496 containerd[1289]: 2024-07-02 07:02:08.721 [INFO][4709] k8s.go 621: Teardown processing complete. ContainerID="694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813" Jul 2 07:02:08.722496 containerd[1289]: time="2024-07-02T07:02:08.722442588Z" level=info msg="TearDown network for sandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" successfully" Jul 2 07:02:08.849084 containerd[1289]: time="2024-07-02T07:02:08.849037460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 07:02:08.849405 containerd[1289]: time="2024-07-02T07:02:08.849373007Z" level=info msg="RemovePodSandbox \"694d8e272d6585a54f701da91696acd1e92f23ecaa0ed6f87ef3db02ad323813\" returns successfully" Jul 2 07:02:13.101854 systemd[1]: Started sshd@15-10.0.0.127:22-10.0.0.1:54420.service - OpenSSH per-connection server daemon (10.0.0.1:54420). Jul 2 07:02:13.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.127:22-10.0.0.1:54420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:13.102913 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 2 07:02:13.102972 kernel: audit: type=1130 audit(1719903733.101:665): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.127:22-10.0.0.1:54420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:13.131000 audit[4751]: USER_ACCT pid=4751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.132073 sshd[4751]: Accepted publickey for core from 10.0.0.1 port 54420 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:13.133116 sshd[4751]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:13.131000 audit[4751]: CRED_ACQ pid=4751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.137530 systemd-logind[1274]: New session 16 of user core. 
Jul 2 07:02:13.139553 kernel: audit: type=1101 audit(1719903733.131:666): pid=4751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.139617 kernel: audit: type=1103 audit(1719903733.131:667): pid=4751 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.139648 kernel: audit: type=1006 audit(1719903733.131:668): pid=4751 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 07:02:13.131000 audit[4751]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde4539820 a2=3 a3=7f464947e480 items=0 ppid=1 pid=4751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:13.145633 kernel: audit: type=1300 audit(1719903733.131:668): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde4539820 a2=3 a3=7f464947e480 items=0 ppid=1 pid=4751 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:13.145678 kernel: audit: type=1327 audit(1719903733.131:668): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:13.131000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:13.148350 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 07:02:13.153000 audit[4751]: USER_START pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.155000 audit[4753]: CRED_ACQ pid=4753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.161150 kernel: audit: type=1105 audit(1719903733.153:669): pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.161200 kernel: audit: type=1103 audit(1719903733.155:670): pid=4753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.257754 sshd[4751]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:13.258000 audit[4751]: USER_END pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.260388 systemd[1]: sshd@15-10.0.0.127:22-10.0.0.1:54420.service: Deactivated successfully. 
Jul 2 07:02:13.261385 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:02:13.262009 systemd-logind[1274]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:02:13.258000 audit[4751]: CRED_DISP pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.262861 systemd-logind[1274]: Removed session 16. Jul 2 07:02:13.265916 kernel: audit: type=1106 audit(1719903733.258:671): pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.266030 kernel: audit: type=1104 audit(1719903733.258:672): pid=4751 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:13.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.127:22-10.0.0.1:54420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:18.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.127:22-10.0.0.1:54426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:18.269756 systemd[1]: Started sshd@16-10.0.0.127:22-10.0.0.1:54426.service - OpenSSH per-connection server daemon (10.0.0.1:54426). Jul 2 07:02:18.270851 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:02:18.270915 kernel: audit: type=1130 audit(1719903738.268:674): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.127:22-10.0.0.1:54426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:18.295000 audit[4766]: USER_ACCT pid=4766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.296921 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 54426 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:18.297987 sshd[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:18.296000 audit[4766]: CRED_ACQ pid=4766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.301786 systemd-logind[1274]: New session 17 of user core. 
Jul 2 07:02:18.303263 kernel: audit: type=1101 audit(1719903738.295:675): pid=4766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.303357 kernel: audit: type=1103 audit(1719903738.296:676): pid=4766 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.303394 kernel: audit: type=1006 audit(1719903738.296:677): pid=4766 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 2 07:02:18.296000 audit[4766]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8ef4dae0 a2=3 a3=7f6812f30480 items=0 ppid=1 pid=4766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:18.444227 kernel: audit: type=1300 audit(1719903738.296:677): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8ef4dae0 a2=3 a3=7f6812f30480 items=0 ppid=1 pid=4766 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:18.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:18.445631 kernel: audit: type=1327 audit(1719903738.296:677): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:18.450350 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 07:02:18.452000 audit[4766]: USER_START pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.517178 kernel: audit: type=1105 audit(1719903738.452:678): pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.517298 kernel: audit: type=1103 audit(1719903738.454:679): pid=4768 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.454000 audit[4768]: CRED_ACQ pid=4768 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.977076 sshd[4766]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:18.977000 audit[4766]: USER_END pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.977000 audit[4766]: CRED_DISP pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.984423 kernel: audit: type=1106 audit(1719903738.977:680): pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.984484 kernel: audit: type=1104 audit(1719903738.977:681): pid=4766 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:18.990605 systemd[1]: sshd@16-10.0.0.127:22-10.0.0.1:54426.service: Deactivated successfully. Jul 2 07:02:18.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.127:22-10.0.0.1:54426 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:18.991191 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:02:18.991709 systemd-logind[1274]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:02:18.996738 systemd[1]: Started sshd@17-10.0.0.127:22-10.0.0.1:54428.service - OpenSSH per-connection server daemon (10.0.0.1:54428). Jul 2 07:02:18.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.127:22-10.0.0.1:54428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:18.997667 systemd-logind[1274]: Removed session 17. Jul 2 07:02:19.025000 audit[4779]: USER_ACCT pid=4779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.026938 sshd[4779]: Accepted publickey for core from 10.0.0.1 port 54428 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:19.026000 audit[4779]: CRED_ACQ pid=4779 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.026000 audit[4779]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1807f1b0 a2=3 a3=7f74d5e24480 items=0 ppid=1 pid=4779 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:19.026000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:19.028261 sshd[4779]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:19.031949 systemd-logind[1274]: New session 18 of user core. Jul 2 07:02:19.038357 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 2 07:02:19.042000 audit[4779]: USER_START pid=4779 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.044000 audit[4782]: CRED_ACQ pid=4782 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.891465 sshd[4779]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:19.891000 audit[4779]: USER_END pid=4779 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.891000 audit[4779]: CRED_DISP pid=4779 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.902059 systemd[1]: sshd@17-10.0.0.127:22-10.0.0.1:54428.service: Deactivated successfully. Jul 2 07:02:19.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.127:22-10.0.0.1:54428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:19.902765 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:02:19.903314 systemd-logind[1274]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:02:19.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.127:22-10.0.0.1:54440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:19.905286 systemd[1]: Started sshd@18-10.0.0.127:22-10.0.0.1:54440.service - OpenSSH per-connection server daemon (10.0.0.1:54440). Jul 2 07:02:19.906095 systemd-logind[1274]: Removed session 18. 
Jul 2 07:02:19.935000 audit[4793]: USER_ACCT pid=4793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.936606 sshd[4793]: Accepted publickey for core from 10.0.0.1 port 54440 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:19.936000 audit[4793]: CRED_ACQ pid=4793 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.936000 audit[4793]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc37d8d460 a2=3 a3=7f200911b480 items=0 ppid=1 pid=4793 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:19.936000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:19.937615 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:19.941077 systemd-logind[1274]: New session 19 of user core. Jul 2 07:02:19.949288 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 07:02:19.952000 audit[4793]: USER_START pid=4793 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:19.953000 audit[4798]: CRED_ACQ pid=4798 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.107000 audit[4812]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=4812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:22.107000 audit[4812]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffde8931c0 a2=0 a3=7fffde8931ac items=0 ppid=2475 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:22.107000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:22.108000 audit[4812]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4812 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:22.108000 audit[4812]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffde8931c0 a2=0 a3=0 items=0 ppid=2475 pid=4812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:22.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:22.118000 audit[4814]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:22.118000 audit[4814]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffcef920550 a2=0 a3=7ffcef92053c items=0 ppid=2475 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:22.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:22.120000 audit[4814]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4814 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:22.120000 audit[4814]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcef920550 a2=0 a3=0 items=0 ppid=2475 pid=4814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:22.120000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:22.825933 sshd[4793]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:22.826000 audit[4793]: USER_END pid=4793 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.826000 audit[4793]: CRED_DISP pid=4793 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.836191 systemd[1]: sshd@18-10.0.0.127:22-10.0.0.1:54440.service: Deactivated successfully. Jul 2 07:02:22.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.127:22-10.0.0.1:54440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:22.836694 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:02:22.837113 systemd-logind[1274]: Session 19 logged out. Waiting for processes to exit. Jul 2 07:02:22.838136 systemd[1]: Started sshd@19-10.0.0.127:22-10.0.0.1:33428.service - OpenSSH per-connection server daemon (10.0.0.1:33428). Jul 2 07:02:22.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.127:22-10.0.0.1:33428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:22.838912 systemd-logind[1274]: Removed session 19. 
Jul 2 07:02:22.867000 audit[4817]: USER_ACCT pid=4817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.869170 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 33428 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:22.868000 audit[4817]: CRED_ACQ pid=4817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.868000 audit[4817]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd313eb250 a2=3 a3=7fdaf26ff480 items=0 ppid=1 pid=4817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:22.868000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:22.870158 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:22.873384 systemd-logind[1274]: New session 20 of user core. Jul 2 07:02:22.882237 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 07:02:22.884000 audit[4817]: USER_START pid=4817 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:22.885000 audit[4819]: CRED_ACQ pid=4819 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.375737 sshd[4817]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:24.376000 audit[4817]: USER_END pid=4817 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.381259 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 2 07:02:24.381330 kernel: audit: type=1106 audit(1719903744.376:711): pid=4817 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.377000 audit[4817]: CRED_DISP pid=4817 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.384676 kernel: audit: type=1104 audit(1719903744.377:712): pid=4817 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.387832 systemd[1]: sshd@19-10.0.0.127:22-10.0.0.1:33428.service: Deactivated successfully. 
Jul 2 07:02:24.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.127:22-10.0.0.1:33428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:24.388516 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:02:24.389066 systemd-logind[1274]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:02:24.390856 systemd[1]: Started sshd@20-10.0.0.127:22-10.0.0.1:33436.service - OpenSSH per-connection server daemon (10.0.0.1:33436). Jul 2 07:02:24.391151 kernel: audit: type=1131 audit(1719903744.386:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.127:22-10.0.0.1:33428 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:24.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.127:22-10.0.0.1:33436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:24.391902 systemd-logind[1274]: Removed session 20. Jul 2 07:02:24.395148 kernel: audit: type=1130 audit(1719903744.389:714): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.127:22-10.0.0.1:33436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:24.417000 audit[4828]: USER_ACCT pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.418407 sshd[4828]: Accepted publickey for core from 10.0.0.1 port 33436 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:24.419552 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:24.422154 kernel: audit: type=1101 audit(1719903744.417:715): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.422205 kernel: audit: type=1103 audit(1719903744.417:716): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.417000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.423329 systemd-logind[1274]: New session 21 of user core. 
Jul 2 07:02:24.431178 kernel: audit: type=1006 audit(1719903744.418:717): pid=4828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 2 07:02:24.431228 kernel: audit: type=1300 audit(1719903744.418:717): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd331b540 a2=3 a3=7f07e40dd480 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:24.418000 audit[4828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd331b540 a2=3 a3=7f07e40dd480 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:24.418000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:24.435765 kernel: audit: type=1327 audit(1719903744.418:717): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:24.441276 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 2 07:02:24.443000 audit[4828]: USER_START pid=4828 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.445000 audit[4830]: CRED_ACQ pid=4830 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.449181 kernel: audit: type=1105 audit(1719903744.443:718): pid=4828 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.724256 sshd[4828]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:24.724000 audit[4828]: USER_END pid=4828 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.724000 audit[4828]: CRED_DISP pid=4828 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:24.726520 systemd[1]: sshd@20-10.0.0.127:22-10.0.0.1:33436.service: Deactivated successfully. Jul 2 07:02:24.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.127:22-10.0.0.1:33436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:24.727308 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:02:24.727850 systemd-logind[1274]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:02:24.728660 systemd-logind[1274]: Removed session 21. Jul 2 07:02:29.743560 systemd[1]: Started sshd@21-10.0.0.127:22-10.0.0.1:33438.service - OpenSSH per-connection server daemon (10.0.0.1:33438). 
Jul 2 07:02:29.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.127:22-10.0.0.1:33438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:29.744472 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 2 07:02:29.744544 kernel: audit: type=1130 audit(1719903749.742:723): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.127:22-10.0.0.1:33438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:29.772000 audit[4848]: USER_ACCT pid=4848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.774206 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 33438 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:29.783000 audit[4848]: CRED_ACQ pid=4848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.785088 sshd[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:29.787503 kernel: audit: type=1101 audit(1719903749.772:724): pid=4848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.787607 kernel: audit: type=1103 audit(1719903749.783:725): pid=4848 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.787629 kernel: audit: type=1006 audit(1719903749.783:726): pid=4848 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 2 07:02:29.788823 systemd-logind[1274]: New session 22 of user core. Jul 2 07:02:29.783000 audit[4848]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff983b10f0 a2=3 a3=7f6f69811480 items=0 ppid=1 pid=4848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:29.839973 kernel: audit: type=1300 audit(1719903749.783:726): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff983b10f0 a2=3 a3=7f6f69811480 items=0 ppid=1 pid=4848 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:29.840020 kernel: audit: type=1327 audit(1719903749.783:726): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:29.783000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:29.850328 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 2 07:02:29.852000 audit[4848]: USER_START pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.854000 audit[4850]: CRED_ACQ pid=4850 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.859807 kernel: audit: type=1105 audit(1719903749.852:727): pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.859855 kernel: audit: type=1103 audit(1719903749.854:728): pid=4850 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.960283 sshd[4848]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:29.960000 audit[4848]: USER_END pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.963608 systemd[1]: sshd@21-10.0.0.127:22-10.0.0.1:33438.service: Deactivated successfully. Jul 2 07:02:29.964529 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:02:29.960000 audit[4848]: CRED_DISP pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.965228 systemd-logind[1274]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:02:29.966162 systemd-logind[1274]: Removed session 22. Jul 2 07:02:29.967729 kernel: audit: type=1106 audit(1719903749.960:729): pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.967800 kernel: audit: type=1104 audit(1719903749.960:730): pid=4848 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:29.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.127:22-10.0.0.1:33438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:30.864031 kubelet[2313]: E0702 07:02:30.863988 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:31.654000 audit[4866]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:31.654000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffc84065c90 a2=0 a3=7ffc84065c7c items=0 ppid=2475 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:31.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:31.658000 audit[4866]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:31.658000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc84065c90 a2=0 a3=0 items=0 ppid=2475 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:31.658000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:31.666000 audit[4868]: NETFILTER_CFG table=filter:119 family=2 entries=34 op=nft_register_rule pid=4868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:31.666000 audit[4868]: SYSCALL arch=c000003e syscall=46 success=yes exit=12604 a0=3 a1=7ffee719d6d0 a2=0 a3=7ffee719d6bc items=0 ppid=2475 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:31.666000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:31.667000 audit[4868]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=4868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:31.667000 audit[4868]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffee719d6d0 a2=0 a3=0 items=0 ppid=2475 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:31.667000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:31.688971 kubelet[2313]: I0702 07:02:31.688917 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rnptn" podStartSLOduration=54.945483328 podStartE2EDuration="1m5.688894654s" podCreationTimestamp="2024-07-02 07:01:26 +0000 UTC" firstStartedPulling="2024-07-02 07:01:55.092475739 +0000 UTC m=+48.307463667" lastFinishedPulling="2024-07-02 07:02:05.835887055 +0000 UTC m=+59.050874993" observedRunningTime="2024-07-02 07:02:07.070711707 +0000 UTC m=+60.285699645" watchObservedRunningTime="2024-07-02 
07:02:31.688894654 +0000 UTC m=+84.903882592" Jul 2 07:02:31.689733 kubelet[2313]: I0702 07:02:31.689707 2313 topology_manager.go:215] "Topology Admit Handler" podUID="2b224ebc-c308-40e9-830b-ef309574f0f8" podNamespace="calico-apiserver" podName="calico-apiserver-68f6fdf845-q85k6" Jul 2 07:02:31.695303 systemd[1]: Created slice kubepods-besteffort-pod2b224ebc_c308_40e9_830b_ef309574f0f8.slice - libcontainer container kubepods-besteffort-pod2b224ebc_c308_40e9_830b_ef309574f0f8.slice. Jul 2 07:02:31.865454 kubelet[2313]: I0702 07:02:31.865419 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b224ebc-c308-40e9-830b-ef309574f0f8-calico-apiserver-certs\") pod \"calico-apiserver-68f6fdf845-q85k6\" (UID: \"2b224ebc-c308-40e9-830b-ef309574f0f8\") " pod="calico-apiserver/calico-apiserver-68f6fdf845-q85k6" Jul 2 07:02:31.865847 kubelet[2313]: I0702 07:02:31.865827 2313 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfc5j\" (UniqueName: \"kubernetes.io/projected/2b224ebc-c308-40e9-830b-ef309574f0f8-kube-api-access-bfc5j\") pod \"calico-apiserver-68f6fdf845-q85k6\" (UID: \"2b224ebc-c308-40e9-830b-ef309574f0f8\") " pod="calico-apiserver/calico-apiserver-68f6fdf845-q85k6" Jul 2 07:02:31.999041 containerd[1289]: time="2024-07-02T07:02:31.998982850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68f6fdf845-q85k6,Uid:2b224ebc-c308-40e9-830b-ef309574f0f8,Namespace:calico-apiserver,Attempt:0,}" Jul 2 07:02:32.126417 systemd-networkd[1112]: cali9d8d2570c8c: Link UP Jul 2 07:02:32.128644 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:02:32.128710 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9d8d2570c8c: link becomes ready Jul 2 07:02:32.129674 systemd-networkd[1112]: cali9d8d2570c8c: Gained carrier Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.055 [INFO][4877] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0 calico-apiserver-68f6fdf845- calico-apiserver 2b224ebc-c308-40e9-830b-ef309574f0f8 1092 0 2024-07-02 07:02:31 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68f6fdf845 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68f6fdf845-q85k6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9d8d2570c8c [] []}} ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.056 [INFO][4877] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.081 [INFO][4887] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" 
HandleID="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Workload="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.089 [INFO][4887] ipam_plugin.go 264: Auto assigning IP ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" HandleID="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Workload="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-68f6fdf845-q85k6", "timestamp":"2024-07-02 07:02:32.081494679 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.089 [INFO][4887] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.089 [INFO][4887] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.089 [INFO][4887] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.091 [INFO][4887] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.096 [INFO][4887] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.102 [INFO][4887] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.104 [INFO][4887] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.109 [INFO][4887] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.109 [INFO][4887] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.111 [INFO][4887] ipam.go 1685: Creating new handle: k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.115 [INFO][4887] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.121 [INFO][4887] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.121 [INFO][4887] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" host="localhost" Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 
07:02:32.121 [INFO][4887] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:02:32.141064 containerd[1289]: 2024-07-02 07:02:32.122 [INFO][4887] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" HandleID="k8s-pod-network.ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Workload="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.123 [INFO][4877] k8s.go 386: Populated endpoint ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0", GenerateName:"calico-apiserver-68f6fdf845-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b224ebc-c308-40e9-830b-ef309574f0f8", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 2, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f6fdf845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68f6fdf845-q85k6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d8d2570c8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.124 [INFO][4877] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.124 [INFO][4877] dataplane_linux.go 68: Setting the host side veth name to cali9d8d2570c8c ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.129 [INFO][4877] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.130 [INFO][4877] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0", GenerateName:"calico-apiserver-68f6fdf845-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b224ebc-c308-40e9-830b-ef309574f0f8", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 2, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68f6fdf845", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b", Pod:"calico-apiserver-68f6fdf845-q85k6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9d8d2570c8c", MAC:"1e:bd:c3:92:69:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:02:32.141853 containerd[1289]: 2024-07-02 07:02:32.138 [INFO][4877] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b" Namespace="calico-apiserver" Pod="calico-apiserver-68f6fdf845-q85k6" WorkloadEndpoint="localhost-k8s-calico--apiserver--68f6fdf845--q85k6-eth0" Jul 2 07:02:32.167545 containerd[1289]: time="2024-07-02T07:02:32.167433189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:02:32.167722 containerd[1289]: time="2024-07-02T07:02:32.167556535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:02:32.167722 containerd[1289]: time="2024-07-02T07:02:32.167632700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:02:32.167722 containerd[1289]: time="2024-07-02T07:02:32.167663478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:02:32.167000 audit[4926]: NETFILTER_CFG table=filter:121 family=2 entries=55 op=nft_register_chain pid=4926 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:02:32.167000 audit[4926]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7fff1d9210e0 a2=0 a3=7fff1d9210cc items=0 ppid=3442 pid=4926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:32.167000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:02:32.195322 systemd[1]: Started cri-containerd-ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b.scope - libcontainer container ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b. Jul 2 07:02:32.205000 audit: BPF prog-id=173 op=LOAD Jul 2 07:02:32.205000 audit: BPF prog-id=174 op=LOAD Jul 2 07:02:32.205000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133988 a2=78 a3=0 items=0 ppid=4917 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:32.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343639656237633838636430663139326163326531303064653365 Jul 2 07:02:32.205000 audit: BPF prog-id=175 op=LOAD Jul 2 07:02:32.205000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000133720 a2=78 a3=0 items=0 ppid=4917 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:32.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343639656237633838636430663139326163326531303064653365 Jul 2 07:02:32.205000 audit: BPF prog-id=175 op=UNLOAD Jul 2 07:02:32.205000 audit: BPF prog-id=174 op=UNLOAD Jul 2 07:02:32.205000 audit: BPF prog-id=176 op=LOAD Jul 2 07:02:32.205000 audit[4927]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000133be0 a2=78 a3=0 items=0 ppid=4917 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:32.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566343639656237633838636430663139326163326531303064653365 Jul 2 07:02:32.207673 systemd-resolved[1227]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 07:02:32.235611 containerd[1289]: time="2024-07-02T07:02:32.235560592Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-68f6fdf845-q85k6,Uid:2b224ebc-c308-40e9-830b-ef309574f0f8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b\"" Jul 2 07:02:32.237273 containerd[1289]: time="2024-07-02T07:02:32.237239618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 07:02:33.236372 systemd-networkd[1112]: cali9d8d2570c8c: Gained IPv6LL Jul 2 07:02:34.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.127:22-10.0.0.1:54202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:34.977331 systemd[1]: Started sshd@22-10.0.0.127:22-10.0.0.1:54202.service - OpenSSH per-connection server daemon (10.0.0.1:54202). Jul 2 07:02:35.002465 kernel: kauditd_printk_skb: 28 callbacks suppressed Jul 2 07:02:35.002528 kernel: audit: type=1130 audit(1719903754.976:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.127:22-10.0.0.1:54202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:35.045000 audit[4973]: USER_ACCT pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.049265 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 54202 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:35.047768 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:35.046000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.052212 systemd-logind[1274]: New session 23 of user core. 
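The Calico CNI trace above (plugin.go/ipam.go) walks through block-affinity IPAM for the new calico-apiserver pod: the plugin takes the host-wide IPAM lock, confirms this host's affinity for the 192.168.88.128/26 block, and claims 192.168.88.133 out of it, which then appears as 192.168.88.133/32 on the workload endpoint. A small Python check of those numbers, using only values printed in the log:

    import ipaddress

    # Block and address taken verbatim from the IPAM trace above.
    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = ipaddress.ip_address("192.168.88.133")

    print(assigned in block)    # True  - the claimed IP falls inside the affine block
    print(block.num_addresses)  # 64    - a /26 affinity block covers 64 addresses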
Jul 2 07:02:35.055543 kernel: audit: type=1101 audit(1719903755.045:744): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.055596 kernel: audit: type=1103 audit(1719903755.046:745): pid=4973 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.055617 kernel: audit: type=1006 audit(1719903755.046:746): pid=4973 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 2 07:02:35.046000 audit[4973]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea64aca00 a2=3 a3=7ff91ec0d480 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:35.060487 kernel: audit: type=1300 audit(1719903755.046:746): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea64aca00 a2=3 a3=7ff91ec0d480 items=0 ppid=1 pid=4973 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:35.060542 kernel: audit: type=1327 audit(1719903755.046:746): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:35.046000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:35.063280 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 07:02:35.066000 audit[4973]: USER_START pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.067000 audit[4975]: CRED_ACQ pid=4975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.087992 kernel: audit: type=1105 audit(1719903755.066:747): pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.088039 kernel: audit: type=1103 audit(1719903755.067:748): pid=4975 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.310579 sshd[4973]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:35.310000 audit[4973]: USER_END pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.312762 systemd[1]: sshd@22-10.0.0.127:22-10.0.0.1:54202.service: Deactivated successfully. 
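The recurring kubelet dns.go:153 "Nameserver limits exceeded" errors in this journal come from kubelet capping a pod's resolv.conf at three nameservers (matching the glibc resolver limit); with more than three configured on the node, it keeps the first three — here 1.1.1.1, 1.0.0.1 and 8.8.8.8 — and reports the rest as omitted. A rough sketch of that truncation, assuming a hypothetical node resolv.conf with a fourth entry (192.168.1.1 is invented for illustration):

    # Hypothetical node nameserver list; only the count matters for the warning.
    nameservers = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"]
    MAX_DNS_NAMESERVERS = 3  # kubelet's limit, mirroring the glibc resolver

    if len(nameservers) > MAX_DNS_NAMESERVERS:
        applied = nameservers[:MAX_DNS_NAMESERVERS]
        print("Nameserver limits exceeded, applied:", " ".join(applied))
        # -> Nameserver limits exceeded, applied: 1.1.1.1 1.0.0.1 8.8.8.8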
Jul 2 07:02:35.313446 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:02:35.313926 systemd-logind[1274]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:02:35.314578 systemd-logind[1274]: Removed session 23. Jul 2 07:02:35.310000 audit[4973]: CRED_DISP pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.333368 kernel: audit: type=1106 audit(1719903755.310:749): pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.333445 kernel: audit: type=1104 audit(1719903755.310:750): pid=4973 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:35.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.127:22-10.0.0.1:54202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:35.863560 kubelet[2313]: E0702 07:02:35.863486 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:36.092000 audit[4990]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:36.092000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc2547c410 a2=0 a3=7ffc2547c3fc items=0 ppid=2475 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:36.092000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:36.095000 audit[4990]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=4990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:36.095000 audit[4990]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc2547c410 a2=0 a3=7ffc2547c3fc items=0 ppid=2475 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:36.095000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:36.393567 containerd[1289]: time="2024-07-02T07:02:36.393488285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:36.394221 containerd[1289]: time="2024-07-02T07:02:36.394175922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 07:02:36.395526 containerd[1289]: time="2024-07-02T07:02:36.395462548Z" level=info msg="ImageCreate event 
name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:36.397097 containerd[1289]: time="2024-07-02T07:02:36.397061928Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:36.398798 containerd[1289]: time="2024-07-02T07:02:36.398743566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:02:36.399363 containerd[1289]: time="2024-07-02T07:02:36.399323929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.162048383s" Jul 2 07:02:36.399363 containerd[1289]: time="2024-07-02T07:02:36.399358405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 07:02:36.401442 containerd[1289]: time="2024-07-02T07:02:36.401401800Z" level=info msg="CreateContainer within sandbox \"ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 07:02:36.435548 containerd[1289]: time="2024-07-02T07:02:36.435457659Z" level=info msg="CreateContainer within sandbox \"ef469eb7c88cd0f192ac2e100de3e0e1be92b9b9ab750992281807125d35be3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6e39a999e529b6ec400c19530f100f2a9f3cedbe822e35ab3ab60fd33bc8a649\"" Jul 2 07:02:36.436205 containerd[1289]: time="2024-07-02T07:02:36.436158411Z" level=info msg="StartContainer for \"6e39a999e529b6ec400c19530f100f2a9f3cedbe822e35ab3ab60fd33bc8a649\"" Jul 2 07:02:36.505402 systemd[1]: Started cri-containerd-6e39a999e529b6ec400c19530f100f2a9f3cedbe822e35ab3ab60fd33bc8a649.scope - libcontainer container 6e39a999e529b6ec400c19530f100f2a9f3cedbe822e35ab3ab60fd33bc8a649. 
Jul 2 07:02:36.519000 audit: BPF prog-id=177 op=LOAD Jul 2 07:02:36.521000 audit: BPF prog-id=178 op=LOAD Jul 2 07:02:36.521000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4917 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:36.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665333961393939653532396236656334303063313935333066313030 Jul 2 07:02:36.521000 audit: BPF prog-id=179 op=LOAD Jul 2 07:02:36.521000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4917 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:36.521000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665333961393939653532396236656334303063313935333066313030 Jul 2 07:02:36.522000 audit: BPF prog-id=179 op=UNLOAD Jul 2 07:02:36.522000 audit: BPF prog-id=178 op=UNLOAD Jul 2 07:02:36.522000 audit: BPF prog-id=180 op=LOAD Jul 2 07:02:36.522000 audit[5003]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4917 pid=5003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:36.522000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665333961393939653532396236656334303063313935333066313030 Jul 2 07:02:36.551329 containerd[1289]: time="2024-07-02T07:02:36.551275467Z" level=info msg="StartContainer for \"6e39a999e529b6ec400c19530f100f2a9f3cedbe822e35ab3ab60fd33bc8a649\" returns successfully" Jul 2 07:02:37.139201 kubelet[2313]: I0702 07:02:37.139114 2313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-68f6fdf845-q85k6" podStartSLOduration=1.975883504 podStartE2EDuration="6.139099197s" podCreationTimestamp="2024-07-02 07:02:31 +0000 UTC" firstStartedPulling="2024-07-02 07:02:32.236879673 +0000 UTC m=+85.451867611" lastFinishedPulling="2024-07-02 07:02:36.400095366 +0000 UTC m=+89.615083304" observedRunningTime="2024-07-02 07:02:37.138549953 +0000 UTC m=+90.353537891" watchObservedRunningTime="2024-07-02 07:02:37.139099197 +0000 UTC m=+90.354087135" Jul 2 07:02:37.153000 audit[5034]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:37.153000 audit[5034]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedce17d60 a2=0 a3=7ffedce17d4c items=0 ppid=2475 pid=5034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:37.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:37.155000 audit[5034]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5034 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:37.155000 audit[5034]: SYSCALL arch=c000003e syscall=46 success=yes exit=14988 a0=3 a1=7ffedce17d60 a2=0 a3=7ffedce17d4c items=0 ppid=2475 pid=5034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:37.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:37.165000 audit[5036]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:37.165000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe9f289250 a2=0 a3=7ffe9f28923c items=0 ppid=2475 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:37.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:37.166000 audit[5036]: NETFILTER_CFG table=nat:127 family=2 entries=51 op=nft_register_chain pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:37.166000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffe9f289250 a2=0 a3=7ffe9f28923c items=0 ppid=2475 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:37.166000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:38.729250 systemd[1]: run-containerd-runc-k8s.io-96b944ef7f286c62f61b1be247e25641ac07aa349929db915f4d523731e4e50a-runc.qpN7QT.mount: Deactivated successfully. Jul 2 07:02:38.863947 kubelet[2313]: E0702 07:02:38.863915 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:40.322684 systemd[1]: Started sshd@23-10.0.0.127:22-10.0.0.1:54214.service - OpenSSH per-connection server daemon (10.0.0.1:54214). Jul 2 07:02:40.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.127:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:40.323798 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 2 07:02:40.323854 kernel: audit: type=1130 audit(1719903760.322:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.127:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:40.361000 audit[5065]: USER_ACCT pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.362067 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 54214 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:40.363568 sshd[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:40.362000 audit[5065]: CRED_ACQ pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.368251 kernel: audit: type=1101 audit(1719903760.361:765): pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.368305 kernel: audit: type=1103 audit(1719903760.362:766): pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.368325 kernel: audit: type=1006 audit(1719903760.363:767): pid=5065 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 2 07:02:40.367887 systemd-logind[1274]: New session 24 of user core. Jul 2 07:02:40.363000 audit[5065]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd59b79d10 a2=3 a3=7f9ecab74480 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:40.407881 kernel: audit: type=1300 audit(1719903760.363:767): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd59b79d10 a2=3 a3=7f9ecab74480 items=0 ppid=1 pid=5065 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:40.408061 kernel: audit: type=1327 audit(1719903760.363:767): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:40.363000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:40.414377 systemd[1]: Started session-24.scope - Session 24 of User core. 
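The pod_startup_latency_tracker entry above for calico-apiserver-68f6fdf845-q85k6 exposes its own arithmetic: podStartE2EDuration is the time from podCreationTimestamp to watchObservedRunningTime, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling − firstStartedPulling). Checking with the timestamps printed in the log (reduced to seconds past 07:02:00 UTC):

    # Values taken from the tracker entry above, as seconds past 07:02:00 UTC.
    created, running = 31.0, 37.139099197
    pull_started, pull_finished = 32.236879673, 36.400095366

    e2e = running - created                     # end-to-end startup time
    slo = e2e - (pull_finished - pull_started)  # excludes the image-pull window
    print(round(e2e, 9), round(slo, 9))
    # -> 6.139099197 1.975883504, matching podStartE2EDuration and podStartSLOduration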
Jul 2 07:02:40.419000 audit[5065]: USER_START pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.421000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.433691 kernel: audit: type=1105 audit(1719903760.419:768): pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.433763 kernel: audit: type=1103 audit(1719903760.421:769): pid=5067 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.554084 sshd[5065]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:40.555000 audit[5065]: USER_END pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.556723 systemd[1]: sshd@23-10.0.0.127:22-10.0.0.1:54214.service: Deactivated successfully. Jul 2 07:02:40.557611 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:02:40.558344 systemd-logind[1274]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:02:40.559245 systemd-logind[1274]: Removed session 24. Jul 2 07:02:40.555000 audit[5065]: CRED_DISP pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.562667 kernel: audit: type=1106 audit(1719903760.555:770): pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.562731 kernel: audit: type=1104 audit(1719903760.555:771): pid=5065 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:40.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.127:22-10.0.0.1:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:41.108000 audit[5079]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:41.108000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc2ad3d5f0 a2=0 a3=7ffc2ad3d5dc items=0 ppid=2475 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:41.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:41.110000 audit[5079]: NETFILTER_CFG table=nat:129 family=2 entries=54 op=nft_register_rule pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:02:41.110000 audit[5079]: SYSCALL arch=c000003e syscall=46 success=yes exit=18564 a0=3 a1=7ffc2ad3d5f0 a2=0 a3=7ffc2ad3d5dc items=0 ppid=2475 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:41.110000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:02:41.863313 kubelet[2313]: E0702 07:02:41.863262 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 07:02:45.575793 systemd[1]: Started sshd@24-10.0.0.127:22-10.0.0.1:39014.service - OpenSSH per-connection server daemon (10.0.0.1:39014). Jul 2 07:02:45.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.127:22-10.0.0.1:39014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:45.577162 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 07:02:45.577223 kernel: audit: type=1130 audit(1719903765.574:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.127:22-10.0.0.1:39014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:45.605000 audit[5083]: USER_ACCT pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.606837 sshd[5083]: Accepted publickey for core from 10.0.0.1 port 39014 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ Jul 2 07:02:45.607870 sshd[5083]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:02:45.605000 audit[5083]: CRED_ACQ pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.612003 systemd-logind[1274]: New session 25 of user core. 
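The NETFILTER_CFG bursts with comm="iptables-restor" in this journal are consistent with kube-proxy's periodic rule resync through the nft-backed iptables-restore. Their PROCTITLE hex is an argv list separated by NUL bytes; decoding it shows the restore runs with the xtables lock wait options plus --noflush --counters, i.e. incremental updates that leave existing chains and counters in place:

    hex_argv = ("69707461626C65732D726573746F7265002D770035002D5700"
                "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = [part.decode() for part in bytes.fromhex(hex_argv).split(b"\x00")]
    print(argv)
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']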
Jul 2 07:02:45.614539 kernel: audit: type=1101 audit(1719903765.605:776): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.614607 kernel: audit: type=1103 audit(1719903765.605:777): pid=5083 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.614638 kernel: audit: type=1006 audit(1719903765.605:778): pid=5083 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 2 07:02:45.605000 audit[5083]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde064c690 a2=3 a3=7fd12680c480 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:45.621212 kernel: audit: type=1300 audit(1719903765.605:778): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffde064c690 a2=3 a3=7fd12680c480 items=0 ppid=1 pid=5083 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:02:45.621245 kernel: audit: type=1327 audit(1719903765.605:778): proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:45.605000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:02:45.628417 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 07:02:45.631000 audit[5083]: USER_START pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.632000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.638489 kernel: audit: type=1105 audit(1719903765.631:779): pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.638527 kernel: audit: type=1103 audit(1719903765.632:780): pid=5085 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.786974 sshd[5083]: pam_unix(sshd:session): session closed for user core Jul 2 07:02:45.786000 audit[5083]: USER_END pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 07:02:45.789897 systemd[1]: sshd@24-10.0.0.127:22-10.0.0.1:39014.service: Deactivated successfully. 
Jul 2 07:02:45.790807 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 07:02:45.791336 systemd-logind[1274]: Session 25 logged out. Waiting for processes to exit.
Jul 2 07:02:45.791989 systemd-logind[1274]: Removed session 25.
Jul 2 07:02:45.787000 audit[5083]: CRED_DISP pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:45.799586 kernel: audit: type=1106 audit(1719903765.786:781): pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:45.799659 kernel: audit: type=1104 audit(1719903765.787:782): pid=5083 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:45.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.127:22-10.0.0.1:39014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:02:50.799274 systemd[1]: Started sshd@25-10.0.0.127:22-10.0.0.1:39018.service - OpenSSH per-connection server daemon (10.0.0.1:39018).
Jul 2 07:02:50.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.127:22-10.0.0.1:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:02:50.800568 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 2 07:02:50.800683 kernel: audit: type=1130 audit(1719903770.798:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.127:22-10.0.0.1:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:02:50.827000 audit[5101]: USER_ACCT pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.828565 sshd[5101]: Accepted publickey for core from 10.0.0.1 port 39018 ssh2: RSA SHA256:KAF3b1zlKL72W7Y/OlvTz0Y8q6kacN7exFNMepNRBwQ
Jul 2 07:02:50.833144 kernel: audit: type=1101 audit(1719903770.827:785): pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.834351 sshd[5101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:02:50.832000 audit[5101]: CRED_ACQ pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.839156 kernel: audit: type=1103 audit(1719903770.832:786): pid=5101 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.832000 audit[5101]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9aae63d0 a2=3 a3=7fa7333d8480 items=0 ppid=1 pid=5101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:02:50.844535 systemd-logind[1274]: New session 26 of user core.
Jul 2 07:02:50.846983 kernel: audit: type=1006 audit(1719903770.832:787): pid=5101 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Jul 2 07:02:50.847670 kernel: audit: type=1300 audit(1719903770.832:787): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe9aae63d0 a2=3 a3=7fa7333d8480 items=0 ppid=1 pid=5101 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:02:50.847696 kernel: audit: type=1327 audit(1719903770.832:787): proctitle=737368643A20636F7265205B707269765D
Jul 2 07:02:50.832000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:02:50.860298 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 07:02:50.863000 audit[5101]: USER_START pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.865000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.870854 kernel: audit: type=1105 audit(1719903770.863:788): pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.870900 kernel: audit: type=1103 audit(1719903770.865:789): pid=5103 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.966485 sshd[5101]: pam_unix(sshd:session): session closed for user core
Jul 2 07:02:50.966000 audit[5101]: USER_END pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.969264 systemd[1]: sshd@25-10.0.0.127:22-10.0.0.1:39018.service: Deactivated successfully.
Jul 2 07:02:50.970002 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 07:02:50.970514 systemd-logind[1274]: Session 26 logged out. Waiting for processes to exit.
Jul 2 07:02:50.966000 audit[5101]: CRED_DISP pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.971291 systemd-logind[1274]: Removed session 26.
Jul 2 07:02:50.973766 kernel: audit: type=1106 audit(1719903770.966:790): pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.973838 kernel: audit: type=1104 audit(1719903770.966:791): pid=5101 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Jul 2 07:02:50.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.127:22-10.0.0.1:39018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'