Jul 2 07:04:15.024148 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024
Jul 2 07:04:15.024182 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607
Jul 2 07:04:15.024197 kernel: BIOS-provided physical RAM map:
Jul 2 07:04:15.024208 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 2 07:04:15.024219 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 2 07:04:15.024229 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 2 07:04:15.024246 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Jul 2 07:04:15.024257 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Jul 2 07:04:15.024268 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Jul 2 07:04:15.024279 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 2 07:04:15.024291 kernel: NX (Execute Disable) protection: active
Jul 2 07:04:15.024302 kernel: SMBIOS 2.7 present.
Jul 2 07:04:15.024313 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Jul 2 07:04:15.024324 kernel: Hypervisor detected: KVM
Jul 2 07:04:15.024341 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 07:04:15.024354 kernel: kvm-clock: using sched offset of 7437344128 cycles
Jul 2 07:04:15.024367 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 07:04:15.024380 kernel: tsc: Detected 2500.004 MHz processor
Jul 2 07:04:15.024393 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:04:15.024406 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:04:15.024418 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Jul 2 07:04:15.024433 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:04:15.024445 kernel: Using GB pages for direct mapping
Jul 2 07:04:15.024458 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:04:15.024471 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Jul 2 07:04:15.024483 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Jul 2 07:04:15.024496 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 07:04:15.024509 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jul 2 07:04:15.024521 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Jul 2 07:04:15.024536 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 07:04:15.024548 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 07:04:15.024561 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Jul 2 07:04:15.024573 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 07:04:15.024586 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Jul 2 07:04:15.024598 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Jul 2 07:04:15.024611 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Jul 2 07:04:15.024624 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Jul 2 07:04:15.024636 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Jul 2 07:04:15.024651 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Jul 2 07:04:15.024664 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Jul 2 07:04:15.024681 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Jul 2 07:04:15.028864 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Jul 2 07:04:15.028884 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Jul 2 07:04:15.028903 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Jul 2 07:04:15.028917 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Jul 2 07:04:15.028930 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Jul 2 07:04:15.028944 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 07:04:15.028957 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 07:04:15.028970 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Jul 2 07:04:15.028984 kernel: NUMA: Initialized distance table, cnt=1
Jul 2 07:04:15.028997 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Jul 2 07:04:15.029010 kernel: Zone ranges:
Jul 2 07:04:15.029024 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:04:15.029040 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Jul 2 07:04:15.029053 kernel: Normal empty
Jul 2 07:04:15.029066 kernel: Movable zone start for each node
Jul 2 07:04:15.029079 kernel: Early memory node ranges
Jul 2 07:04:15.029093 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 2 07:04:15.029106 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Jul 2 07:04:15.029120 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Jul 2 07:04:15.029133 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:04:15.029146 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 2 07:04:15.029162 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Jul 2 07:04:15.029175 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 07:04:15.029236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 07:04:15.029253 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Jul 2 07:04:15.029267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 07:04:15.029281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:04:15.029294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 07:04:15.029307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 07:04:15.029321 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:04:15.029338 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 2 07:04:15.029351 kernel: TSC deadline timer available
Jul 2 07:04:15.029365 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 07:04:15.029377 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jul 2 07:04:15.029391 kernel: Booting paravirtualized kernel on KVM
Jul 2 07:04:15.029405 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:04:15.029418 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jul 2 07:04:15.029431 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576
Jul 2 07:04:15.029445 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152
Jul 2 07:04:15.029460 kernel: pcpu-alloc: [0] 0 1
Jul 2 07:04:15.029473 kernel: kvm-guest: PV spinlocks enabled
Jul 2 07:04:15.029487 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:04:15.029500 kernel: Fallback order for Node 0: 0
Jul 2 07:04:15.029513 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Jul 2 07:04:15.029526 kernel: Policy zone: DMA32
Jul 2 07:04:15.029542 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607
Jul 2 07:04:15.029557 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:04:15.029572 kernel: random: crng init done
Jul 2 07:04:15.029585 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 07:04:15.029598 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 07:04:15.029612 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:04:15.029626 kernel: Memory: 1928268K/2057760K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 129232K reserved, 0K cma-reserved)
Jul 2 07:04:15.029639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 07:04:15.029651 kernel: Kernel/User page tables isolation: enabled
Jul 2 07:04:15.029665 kernel: ftrace: allocating 36081 entries in 141 pages
Jul 2 07:04:15.029678 kernel: ftrace: allocated 141 pages with 4 groups
Jul 2 07:04:15.029705 kernel: Dynamic Preempt: voluntary
Jul 2 07:04:15.029718 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 07:04:15.029732 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:04:15.029746 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 07:04:15.029806 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 07:04:15.029819 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:04:15.029845 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:04:15.029858 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:04:15.029870 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 07:04:15.029885 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 07:04:15.029899 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 2 07:04:15.029913 kernel: Console: colour VGA+ 80x25
Jul 2 07:04:15.029925 kernel: printk: console [ttyS0] enabled
Jul 2 07:04:15.029940 kernel: ACPI: Core revision 20220331
Jul 2 07:04:15.029952 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Jul 2 07:04:15.029964 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:04:15.029976 kernel: x2apic enabled
Jul 2 07:04:15.029990 kernel: Switched APIC routing to physical x2apic.
Jul 2 07:04:15.030004 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jul 2 07:04:15.030021 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Jul 2 07:04:15.030036 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jul 2 07:04:15.030061 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jul 2 07:04:15.030078 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:04:15.030093 kernel: Spectre V2 : Mitigation: Retpolines
Jul 2 07:04:15.030107 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:04:15.030122 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:04:15.030137 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jul 2 07:04:15.030151 kernel: RETBleed: Vulnerable
Jul 2 07:04:15.030166 kernel: Speculative Store Bypass: Vulnerable
Jul 2 07:04:15.030181 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 07:04:15.030194 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 07:04:15.030209 kernel: GDS: Unknown: Dependent on hypervisor status
Jul 2 07:04:15.030227 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:04:15.030241 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:04:15.030256 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:04:15.030271 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Jul 2 07:04:15.030284 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Jul 2 07:04:15.030353 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jul 2 07:04:15.030370 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jul 2 07:04:15.030395 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jul 2 07:04:15.030409 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jul 2 07:04:15.030423 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:04:15.030437 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Jul 2 07:04:15.030451 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Jul 2 07:04:15.030465 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Jul 2 07:04:15.030607 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Jul 2 07:04:15.030624 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Jul 2 07:04:15.030638 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Jul 2 07:04:15.030653 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Jul 2 07:04:15.030671 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:04:15.030685 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:04:15.030711 kernel: LSM: Security Framework initializing
Jul 2 07:04:15.030726 kernel: SELinux: Initializing.
Jul 2 07:04:15.030740 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 07:04:15.030753 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jul 2 07:04:15.030805 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jul 2 07:04:15.031043 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul 2 07:04:15.031091 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul 2 07:04:15.031107 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul 2 07:04:15.031122 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul 2 07:04:15.031142 kernel: cblist_init_generic: Setting adjustable number of callback queues.
Jul 2 07:04:15.031181 kernel: cblist_init_generic: Setting shift to 1 and lim to 1.
Jul 2 07:04:15.031195 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jul 2 07:04:15.031210 kernel: signal: max sigframe size: 3632
Jul 2 07:04:15.031224 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:04:15.031265 kernel: rcu: Max phase no-delay instances is 400.
Jul 2 07:04:15.031279 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 07:04:15.031293 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:04:15.031308 kernel: x86: Booting SMP configuration:
Jul 2 07:04:15.031347 kernel: .... node #0, CPUs: #1
Jul 2 07:04:15.031362 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 2 07:04:15.031378 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 07:04:15.031484 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 07:04:15.032300 kernel: smpboot: Max logical packages: 1
Jul 2 07:04:15.032318 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Jul 2 07:04:15.032363 kernel: devtmpfs: initialized
Jul 2 07:04:15.032378 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:04:15.032392 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:04:15.032412 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 07:04:15.032456 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:04:15.032470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:04:15.032485 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:04:15.032500 kernel: audit: type=2000 audit(1719903854.699:1): state=initialized audit_enabled=0 res=1
Jul 2 07:04:15.032543 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:04:15.032558 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:04:15.032572 kernel: cpuidle: using governor menu
Jul 2 07:04:15.032586 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:04:15.032632 kernel: dca service started, version 1.12.1
Jul 2 07:04:15.032647 kernel: PCI: Using configuration type 1 for base access
Jul 2 07:04:15.032660 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:04:15.032674 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:04:15.036730 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 2 07:04:15.036762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:04:15.036779 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 2 07:04:15.036796 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:04:15.036811 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:04:15.036832 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:04:15.036847 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:04:15.036862 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 2 07:04:15.036877 kernel: ACPI: Interpreter enabled
Jul 2 07:04:15.036892 kernel: ACPI: PM: (supports S0 S5)
Jul 2 07:04:15.036908 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:04:15.036923 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:04:15.036938 kernel: PCI: Using E820 reservations for host bridge windows
Jul 2 07:04:15.036953 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 2 07:04:15.036972 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 07:04:15.037285 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 07:04:15.037540 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jul 2 07:04:15.037863 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 07:04:15.037887 kernel: acpiphp: Slot [3] registered
Jul 2 07:04:15.037903 kernel: acpiphp: Slot [4] registered
Jul 2 07:04:15.037919 kernel: acpiphp: Slot [5] registered
Jul 2 07:04:15.037939 kernel: acpiphp: Slot [6] registered
Jul 2 07:04:15.037954 kernel: acpiphp: Slot [7] registered
Jul 2 07:04:15.037969 kernel: acpiphp: Slot [8] registered
Jul 2 07:04:15.037984 kernel: acpiphp: Slot [9] registered
Jul 2 07:04:15.038000 kernel: acpiphp: Slot [10] registered
Jul 2 07:04:15.038015 kernel: acpiphp: Slot [11] registered
Jul 2 07:04:15.038030 kernel: acpiphp: Slot [12] registered
Jul 2 07:04:15.038045 kernel: acpiphp: Slot [13] registered
Jul 2 07:04:15.038059 kernel: acpiphp: Slot [14] registered
Jul 2 07:04:15.038074 kernel: acpiphp: Slot [15] registered
Jul 2 07:04:15.038092 kernel: acpiphp: Slot [16] registered
Jul 2 07:04:15.038107 kernel: acpiphp: Slot [17] registered
Jul 2 07:04:15.038121 kernel: acpiphp: Slot [18] registered
Jul 2 07:04:15.038136 kernel: acpiphp: Slot [19] registered
Jul 2 07:04:15.038150 kernel: acpiphp: Slot [20] registered
Jul 2 07:04:15.038165 kernel: acpiphp: Slot [21] registered
Jul 2 07:04:15.038180 kernel: acpiphp: Slot [22] registered
Jul 2 07:04:15.038195 kernel: acpiphp: Slot [23] registered
Jul 2 07:04:15.038210 kernel: acpiphp: Slot [24] registered
Jul 2 07:04:15.038227 kernel: acpiphp: Slot [25] registered
Jul 2 07:04:15.038242 kernel: acpiphp: Slot [26] registered
Jul 2 07:04:15.038255 kernel: acpiphp: Slot [27] registered
Jul 2 07:04:15.038270 kernel: acpiphp: Slot [28] registered
Jul 2 07:04:15.038285 kernel: acpiphp: Slot [29] registered
Jul 2 07:04:15.038353 kernel: acpiphp: Slot [30] registered
Jul 2 07:04:15.038370 kernel: acpiphp: Slot [31] registered
Jul 2 07:04:15.038390 kernel: PCI host bridge to bus 0000:00
Jul 2 07:04:15.038532 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 07:04:15.038652 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 07:04:15.038781 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 07:04:15.038893 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Jul 2 07:04:15.039004 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 07:04:15.039145 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 07:04:15.039281 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Jul 2 07:04:15.039469 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Jul 2 07:04:15.039608 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 07:04:15.039749 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Jul 2 07:04:15.039876 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Jul 2 07:04:15.040089 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Jul 2 07:04:15.040218 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Jul 2 07:04:15.040345 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Jul 2 07:04:15.040474 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Jul 2 07:04:15.040599 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Jul 2 07:04:15.044912 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Jul 2 07:04:15.045060 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Jul 2 07:04:15.045190 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Jul 2 07:04:15.045319 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 2 07:04:15.045454 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 07:04:15.045814 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Jul 2 07:04:15.045965 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 07:04:15.046095 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Jul 2 07:04:15.046116 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 07:04:15.046132 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 07:04:15.046148 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 07:04:15.046163 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 07:04:15.046178 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 07:04:15.046198 kernel: iommu: Default domain type: Translated
Jul 2 07:04:15.046213 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:04:15.046229 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:04:15.046244 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:04:15.046260 kernel: PTP clock support registered
Jul 2 07:04:15.046275 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:04:15.046290 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 07:04:15.046305 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 2 07:04:15.046319 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Jul 2 07:04:15.046460 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Jul 2 07:04:15.046588 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Jul 2 07:04:15.049173 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 2 07:04:15.049231 kernel: vgaarb: loaded
Jul 2 07:04:15.049248 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Jul 2 07:04:15.049264 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Jul 2 07:04:15.049280 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 07:04:15.049723 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:04:15.049749 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:04:15.049765 kernel: pnp: PnP ACPI init
Jul 2 07:04:15.049780 kernel: pnp: PnP ACPI: found 5 devices
Jul 2 07:04:15.049795 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:04:15.049810 kernel: NET: Registered PF_INET protocol family
Jul 2 07:04:15.049826 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 07:04:15.049841 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jul 2 07:04:15.049857 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:04:15.049872 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 07:04:15.049890 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jul 2 07:04:15.049905 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jul 2 07:04:15.049921 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 07:04:15.049936 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jul 2 07:04:15.049950 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:04:15.050014 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:04:15.050406 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 07:04:15.050666 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 07:04:15.050808 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 07:04:15.051217 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Jul 2 07:04:15.051492 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 07:04:15.051517 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:04:15.051534 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 07:04:15.051551 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Jul 2 07:04:15.051567 kernel: clocksource: Switched to clocksource tsc
Jul 2 07:04:15.051582 kernel: Initialise system trusted keyrings
Jul 2 07:04:15.051602 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jul 2 07:04:15.051617 kernel: Key type asymmetric registered
Jul 2 07:04:15.051632 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:04:15.051647 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed
Jul 2 07:04:15.051663 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:04:15.051678 kernel: io scheduler mq-deadline registered
Jul 2 07:04:15.051707 kernel: io scheduler kyber registered
Jul 2 07:04:15.051723 kernel: io scheduler bfq registered
Jul 2 07:04:15.051738 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:04:15.051753 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:04:15.051772 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:04:15.051788 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 07:04:15.051803 kernel: i8042: Warning: Keylock active
Jul 2 07:04:15.051819 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 07:04:15.051834 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 07:04:15.052034 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 2 07:04:15.052207 kernel: rtc_cmos 00:00: registered as rtc0
Jul 2 07:04:15.052336 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:04:14 UTC (1719903854)
Jul 2 07:04:15.052450 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 2 07:04:15.052470 kernel: intel_pstate: CPU model not supported
Jul 2 07:04:15.052485 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:04:15.052501 kernel: Segment Routing with IPv6
Jul 2 07:04:15.052515 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:04:15.052531 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:04:15.052546 kernel: Key type dns_resolver registered
Jul 2 07:04:15.052561 kernel: IPI shorthand broadcast: enabled
Jul 2 07:04:15.052629 kernel: sched_clock: Marking stable (607421245, 218079109)->(898495008, -72994654)
Jul 2 07:04:15.052646 kernel: registered taskstats version 1
Jul 2 07:04:15.056720 kernel: Loading compiled-in X.509 certificates
Jul 2 07:04:15.056755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797'
Jul 2 07:04:15.056771 kernel: Key type .fscrypt registered
Jul 2 07:04:15.056786 kernel: Key type fscrypt-provisioning registered
Jul 2 07:04:15.056800 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 07:04:15.056816 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:04:15.056830 kernel: ima: No architecture policies found
Jul 2 07:04:15.056852 kernel: clk: Disabling unused clocks
Jul 2 07:04:15.056867 kernel: Freeing unused kernel image (initmem) memory: 47156K
Jul 2 07:04:15.056882 kernel: Write protecting the kernel read-only data: 34816k
Jul 2 07:04:15.056896 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:04:15.056910 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K
Jul 2 07:04:15.056925 kernel: Run /init as init process
Jul 2 07:04:15.056952 kernel: with arguments:
Jul 2 07:04:15.056966 kernel: /init
Jul 2 07:04:15.056980 kernel: with environment:
Jul 2 07:04:15.056996 kernel: HOME=/
Jul 2 07:04:15.057101 kernel: TERM=linux
Jul 2 07:04:15.057121 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:04:15.057142 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:04:15.057161 systemd[1]: Detected virtualization amazon.
Jul 2 07:04:15.057177 systemd[1]: Detected architecture x86-64.
Jul 2 07:04:15.057192 systemd[1]: Running in initrd.
Jul 2 07:04:15.057483 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:04:15.057502 systemd[1]: Hostname set to .
Jul 2 07:04:15.057519 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:04:15.057535 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:04:15.057551 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 07:04:15.057649 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 07:04:15.057666 systemd[1]: Reached target paths.target - Path Units.
Jul 2 07:04:15.057681 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 07:04:15.057715 systemd[1]: Reached target swap.target - Swaps.
Jul 2 07:04:15.057730 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 07:04:15.057747 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 2 07:04:15.057762 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 2 07:04:15.057778 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jul 2 07:04:15.057793 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 07:04:15.058011 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 07:04:15.058035 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 07:04:15.058051 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 07:04:15.058111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 07:04:15.058169 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 07:04:15.058188 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 07:04:15.058205 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 2 07:04:15.058221 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:04:15.058237 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 07:04:15.058253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 07:04:15.058273 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 07:04:15.058289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:04:15.058348 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:04:15.058367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 07:04:15.058393 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 07:04:15.058419 systemd-journald[180]: Journal started Jul 2 07:04:15.058546 systemd-journald[180]: Runtime Journal (/run/log/journal/ec2d9dc483d2cf5fc3b1578accd71b44) is 4.8M, max 38.6M, 33.8M free. Jul 2 07:04:15.027430 systemd-modules-load[181]: Inserted module 'overlay' Jul 2 07:04:15.181544 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 07:04:15.181568 kernel: Bridge firewalling registered Jul 2 07:04:15.181580 kernel: SCSI subsystem initialized Jul 2 07:04:15.181591 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:04:15.181602 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:04:15.181613 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 07:04:15.082105 systemd-modules-load[181]: Inserted module 'br_netfilter' Jul 2 07:04:15.144051 systemd-modules-load[181]: Inserted module 'dm_multipath' Jul 2 07:04:15.186698 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 07:04:15.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.188481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jul 2 07:04:15.193301 kernel: audit: type=1130 audit(1719903855.187:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.193623 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:04:15.206645 kernel: audit: type=1130 audit(1719903855.192:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.206924 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 07:04:15.213716 kernel: audit: type=1130 audit(1719903855.206:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.226824 kernel: audit: type=1130 audit(1719903855.220:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.230975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 2 07:04:15.235509 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 07:04:15.237542 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 07:04:15.253481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:04:15.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.258168 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 07:04:15.252000 audit: BPF prog-id=6 op=LOAD Jul 2 07:04:15.260370 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:04:15.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.263708 kernel: audit: type=1130 audit(1719903855.252:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.263736 kernel: audit: type=1334 audit(1719903855.252:7): prog-id=6 op=LOAD Jul 2 07:04:15.263752 kernel: audit: type=1130 audit(1719903855.258:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.271878 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:04:15.276997 kernel: audit: type=1130 audit(1719903855.273:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:04:15.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.284859 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 07:04:15.305713 dracut-cmdline[206]: dracut-dracut-053 Jul 2 07:04:15.305713 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:04:15.308100 systemd-resolved[200]: Positive Trust Anchors: Jul 2 07:04:15.308112 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:04:15.308161 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:04:15.312259 systemd-resolved[200]: Defaulting to hostname 'linux'. Jul 2 07:04:15.314169 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 07:04:15.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:04:15.326124 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:04:15.330579 kernel: audit: type=1130 audit(1719903855.325:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.404720 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:04:15.418716 kernel: iscsi: registered transport (tcp) Jul 2 07:04:15.444017 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:04:15.444899 kernel: QLogic iSCSI HBA Driver Jul 2 07:04:15.491054 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 07:04:15.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.499944 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 07:04:15.576370 kernel: raid6: avx512x4 gen() 16007 MB/s Jul 2 07:04:15.592746 kernel: raid6: avx512x2 gen() 14825 MB/s Jul 2 07:04:15.609772 kernel: raid6: avx512x1 gen() 12426 MB/s Jul 2 07:04:15.626745 kernel: raid6: avx2x4 gen() 16791 MB/s Jul 2 07:04:15.644745 kernel: raid6: avx2x2 gen() 14383 MB/s Jul 2 07:04:15.661746 kernel: raid6: avx2x1 gen() 10977 MB/s Jul 2 07:04:15.661822 kernel: raid6: using algorithm avx2x4 gen() 16791 MB/s Jul 2 07:04:15.678916 kernel: raid6: .... xor() 6719 MB/s, rmw enabled Jul 2 07:04:15.678989 kernel: raid6: using avx512x2 recovery algorithm Jul 2 07:04:15.682715 kernel: xor: automatically using best checksumming function avx Jul 2 07:04:15.866729 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:04:15.885203 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 2 07:04:15.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.887000 audit: BPF prog-id=7 op=LOAD Jul 2 07:04:15.887000 audit: BPF prog-id=8 op=LOAD Jul 2 07:04:15.893553 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:04:15.924308 systemd-udevd[382]: Using default interface naming scheme 'v252'. Jul 2 07:04:15.934803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:04:15.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:15.944895 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 07:04:15.989808 dracut-pre-trigger[391]: rd.md=0: removing MD RAID activation Jul 2 07:04:16.042470 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 07:04:16.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:16.048046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 07:04:16.116244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:04:16.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:04:16.208712 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:04:16.211066 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 07:04:16.225343 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 07:04:16.225501 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 2 07:04:16.225634 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:ca:00:42:0e:b9 Jul 2 07:04:16.227421 (udev-worker)[426]: Network interface NamePolicy= disabled on kernel command line. Jul 2 07:04:16.417363 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:04:16.417399 kernel: AES CTR mode by8 optimization enabled Jul 2 07:04:16.417419 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 07:04:16.417658 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 07:04:16.417680 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 07:04:16.418069 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:04:16.418090 kernel: GPT:9289727 != 16777215 Jul 2 07:04:16.418107 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:04:16.418125 kernel: GPT:9289727 != 16777215 Jul 2 07:04:16.418142 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:04:16.418159 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 07:04:16.418185 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (437) Jul 2 07:04:16.418203 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (426) Jul 2 07:04:16.502278 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 07:04:16.518662 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 2 07:04:16.528112 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. 
Jul 2 07:04:16.533240 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 07:04:16.533363 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 07:04:16.550941 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 07:04:16.560198 disk-uuid[598]: Primary Header is updated. Jul 2 07:04:16.560198 disk-uuid[598]: Secondary Entries is updated. Jul 2 07:04:16.560198 disk-uuid[598]: Secondary Header is updated. Jul 2 07:04:16.566914 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 07:04:16.572712 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 07:04:16.578715 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 07:04:17.585726 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 07:04:17.586394 disk-uuid[599]: The operation has completed successfully. Jul 2 07:04:17.773205 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:04:17.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:17.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:17.773321 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 07:04:17.786079 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 07:04:17.791217 sh[941]: Success Jul 2 07:04:17.811350 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:04:17.921962 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 07:04:17.932596 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jul 2 07:04:17.939076 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 07:04:17.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:17.974764 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 07:04:17.974894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:04:17.976194 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 07:04:17.976236 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 07:04:17.976816 kernel: BTRFS info (device dm-0): using free space tree Jul 2 07:04:18.011709 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 07:04:18.037897 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 07:04:18.038343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 07:04:18.046234 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 07:04:18.049641 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 07:04:18.089055 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:04:18.089125 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:04:18.090715 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 07:04:18.112981 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 07:04:18.121878 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jul 2 07:04:18.125718 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:04:18.133778 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 07:04:18.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.139221 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 07:04:18.215921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 07:04:18.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.219000 audit: BPF prog-id=9 op=LOAD Jul 2 07:04:18.233092 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 07:04:18.281499 systemd-networkd[1131]: lo: Link UP Jul 2 07:04:18.281511 systemd-networkd[1131]: lo: Gained carrier Jul 2 07:04:18.283253 systemd-networkd[1131]: Enumeration completed Jul 2 07:04:18.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.283379 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 07:04:18.284099 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:04:18.284103 systemd-networkd[1131]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:04:18.285702 systemd[1]: Reached target network.target - Network. Jul 2 07:04:18.298562 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... 
Jul 2 07:04:18.300938 systemd-networkd[1131]: eth0: Link UP Jul 2 07:04:18.300943 systemd-networkd[1131]: eth0: Gained carrier Jul 2 07:04:18.300954 systemd-networkd[1131]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:04:18.324990 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 07:04:18.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.338104 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 07:04:18.345532 iscsid[1136]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:04:18.345532 iscsid[1136]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:04:18.345532 iscsid[1136]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:04:18.345532 iscsid[1136]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:04:18.345532 iscsid[1136]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:04:18.345532 iscsid[1136]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:04:18.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.345850 systemd[1]: Started iscsid.service - Open-iSCSI. 
Jul 2 07:04:18.345989 systemd-networkd[1131]: eth0: DHCPv4 address 172.31.25.147/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 07:04:18.351257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 07:04:18.394508 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 07:04:18.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.398871 ignition[1077]: Ignition 2.15.0 Jul 2 07:04:18.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.396087 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 07:04:18.398883 ignition[1077]: Stage: fetch-offline Jul 2 07:04:18.397438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:04:18.399130 ignition[1077]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:04:18.399121 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 07:04:18.399144 ignition[1077]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 07:04:18.407055 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 07:04:18.399790 ignition[1077]: Ignition finished successfully Jul 2 07:04:18.409615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:04:18.412345 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 07:04:18.437904 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 07:04:18.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:04:18.444179 ignition[1151]: Ignition 2.15.0 Jul 2 07:04:18.444192 ignition[1151]: Stage: fetch Jul 2 07:04:18.444448 ignition[1151]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:04:18.444458 ignition[1151]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 07:04:18.444648 ignition[1151]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 07:04:18.459925 ignition[1151]: PUT result: OK Jul 2 07:04:18.463439 ignition[1151]: parsed url from cmdline: "" Jul 2 07:04:18.463450 ignition[1151]: no config URL provided Jul 2 07:04:18.463461 ignition[1151]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:04:18.463475 ignition[1151]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:04:18.463652 ignition[1151]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 07:04:18.480289 ignition[1151]: PUT result: OK Jul 2 07:04:18.482159 ignition[1151]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 07:04:18.492213 ignition[1151]: GET result: OK Jul 2 07:04:18.492333 ignition[1151]: parsing config with SHA512: e8501c7eaf43589491b6eee6080b642630a98daedc1b79ec14231fcb97c9e4d111a19f546c704d8d17c5db8e368e86441b665e127e83f953239551c2d80bc1df Jul 2 07:04:18.499565 unknown[1151]: fetched base config from "system" Jul 2 07:04:18.499587 unknown[1151]: fetched base config from "system" Jul 2 07:04:18.499600 unknown[1151]: fetched user config from "aws" Jul 2 07:04:18.501582 ignition[1151]: fetch: fetch complete Jul 2 07:04:18.501596 ignition[1151]: fetch: fetch passed Jul 2 07:04:18.501661 ignition[1151]: Ignition finished successfully Jul 2 07:04:18.508198 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 07:04:18.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:04:18.519957 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 07:04:18.541292 ignition[1161]: Ignition 2.15.0 Jul 2 07:04:18.541316 ignition[1161]: Stage: kargs Jul 2 07:04:18.541766 ignition[1161]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:04:18.541780 ignition[1161]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 07:04:18.542504 ignition[1161]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 07:04:18.547459 ignition[1161]: PUT result: OK Jul 2 07:04:18.551639 ignition[1161]: kargs: kargs passed Jul 2 07:04:18.551726 ignition[1161]: Ignition finished successfully Jul 2 07:04:18.554349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 07:04:18.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.559926 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 07:04:18.577081 ignition[1167]: Ignition 2.15.0 Jul 2 07:04:18.577097 ignition[1167]: Stage: disks Jul 2 07:04:18.577593 ignition[1167]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:04:18.577609 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 07:04:18.577841 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 07:04:18.580459 ignition[1167]: PUT result: OK Jul 2 07:04:18.585998 ignition[1167]: disks: disks passed Jul 2 07:04:18.586057 ignition[1167]: Ignition finished successfully Jul 2 07:04:18.589849 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 07:04:18.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.590098 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jul 2 07:04:18.593869 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:04:18.596423 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 07:04:18.598924 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 07:04:18.602857 systemd[1]: Reached target basic.target - Basic System. Jul 2 07:04:18.611065 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 07:04:18.630061 systemd-fsck[1175]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 07:04:18.640181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 07:04:18.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:18.647009 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 07:04:18.791193 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 07:04:18.792523 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 07:04:18.794507 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 07:04:18.806912 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 07:04:18.814279 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 07:04:18.822434 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 07:04:18.822825 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:04:18.822866 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 07:04:18.831131 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jul 2 07:04:18.832261 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1192) Jul 2 07:04:18.835264 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:04:18.835298 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:04:18.835311 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 07:04:18.836902 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 07:04:18.842707 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 07:04:18.843920 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 07:04:18.938969 initrd-setup-root[1217]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:04:18.944847 initrd-setup-root[1224]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:04:18.950984 initrd-setup-root[1231]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:04:18.957028 initrd-setup-root[1238]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:04:19.084483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 07:04:19.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.089295 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 2 07:04:19.089336 kernel: audit: type=1130 audit(1719903859.083:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.090874 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 07:04:19.093774 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jul 2 07:04:19.114712 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:04:19.114899 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 07:04:19.166583 ignition[1304]: INFO : Ignition 2.15.0 Jul 2 07:04:19.168313 ignition[1304]: INFO : Stage: mount Jul 2 07:04:19.169729 ignition[1304]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:04:19.170997 ignition[1304]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 07:04:19.172403 ignition[1304]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 07:04:19.174961 ignition[1304]: INFO : PUT result: OK Jul 2 07:04:19.179518 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 07:04:19.184436 kernel: audit: type=1130 audit(1719903859.180:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.184545 ignition[1304]: INFO : mount: mount passed Jul 2 07:04:19.184545 ignition[1304]: INFO : Ignition finished successfully Jul 2 07:04:19.194325 kernel: audit: type=1130 audit(1719903859.183:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:19.181798 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 07:04:19.193950 systemd[1]: Starting ignition-files.service - Ignition (files)... 
Jul 2 07:04:19.218566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 2 07:04:19.231787 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1314)
Jul 2 07:04:19.233999 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74
Jul 2 07:04:19.234056 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jul 2 07:04:19.234074 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 07:04:19.244720 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 07:04:19.258411 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 2 07:04:19.291987 ignition[1332]: INFO : Ignition 2.15.0
Jul 2 07:04:19.291987 ignition[1332]: INFO : Stage: files
Jul 2 07:04:19.295125 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:04:19.295125 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 07:04:19.295125 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 07:04:19.295125 ignition[1332]: INFO : PUT result: OK
Jul 2 07:04:19.302318 ignition[1332]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 07:04:19.306037 ignition[1332]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 07:04:19.306037 ignition[1332]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 07:04:19.313568 ignition[1332]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 07:04:19.315368 ignition[1332]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 07:04:19.315368 ignition[1332]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 07:04:19.315005 unknown[1332]: wrote ssh authorized keys file for user: core
Jul 2 07:04:19.321807 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 07:04:19.325166 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 2 07:04:19.329983 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:04:19.329983 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 2 07:04:19.405766 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 2 07:04:19.541184 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 2 07:04:19.544114 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 07:04:19.546238 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 07:04:19.546238 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:04:19.551803 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:04:19.555897 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Jul 2 07:04:19.556376 systemd-networkd[1131]: eth0: Gained IPv6LL
Jul 2 07:04:19.914819 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 2 07:04:20.326395 ignition[1332]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Jul 2 07:04:20.326395 ignition[1332]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 2 07:04:20.330573 ignition[1332]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 07:04:20.335983 ignition[1332]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 2 07:04:20.335983 ignition[1332]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 2 07:04:20.335983 ignition[1332]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 2 07:04:20.341298 ignition[1332]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:04:20.341298 ignition[1332]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 2 07:04:20.341298 ignition[1332]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 2 07:04:20.341298 ignition[1332]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jul 2 07:04:20.349000 ignition[1332]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jul 2 07:04:20.349000 ignition[1332]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:04:20.349000 ignition[1332]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 2 07:04:20.349000 ignition[1332]: INFO : files: files passed
Jul 2 07:04:20.349000 ignition[1332]: INFO : Ignition finished successfully
Jul 2 07:04:20.359050 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 2 07:04:20.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.363714 kernel: audit: type=1130 audit(1719903860.359:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.374131 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 2 07:04:20.379685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 2 07:04:20.383901 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 2 07:04:20.384173 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 2 07:04:20.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.394064 kernel: audit: type=1130 audit(1719903860.387:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.394110 kernel: audit: type=1131 audit(1719903860.387:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.398253 initrd-setup-root-after-ignition[1358]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 07:04:20.400915 initrd-setup-root-after-ignition[1358]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 07:04:20.405413 initrd-setup-root-after-ignition[1362]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 2 07:04:20.409229 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 07:04:20.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.409435 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 07:04:20.417220 kernel: audit: type=1130 audit(1719903860.408:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.425901 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 07:04:20.450553 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:04:20.450725 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 07:04:20.477882 kernel: audit: type=1130 audit(1719903860.460:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.477918 kernel: audit: type=1131 audit(1719903860.460:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.461181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 07:04:20.487261 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Jul 2 07:04:20.490661 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 07:04:20.498418 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 07:04:20.518883 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 07:04:20.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.524737 kernel: audit: type=1130 audit(1719903860.518:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.526250 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 07:04:20.539843 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:04:20.540080 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:04:20.544884 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 07:04:20.547069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:04:20.549117 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 07:04:20.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.559642 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 07:04:20.566431 systemd[1]: Stopped target basic.target - Basic System. Jul 2 07:04:20.569852 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 07:04:20.572955 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 2 07:04:20.575238 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 2 07:04:20.578644 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 2 07:04:20.581853 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 2 07:04:20.584508 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 2 07:04:20.587648 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 2 07:04:20.589898 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 07:04:20.592940 systemd[1]: Stopped target swap.target - Swaps.
Jul 2 07:04:20.594628 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 2 07:04:20.594883 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 2 07:04:20.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.599808 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 2 07:04:20.601206 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 2 07:04:20.603386 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 2 07:04:20.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.609928 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 2 07:04:20.610135 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 2 07:04:20.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.612547 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 2 07:04:20.614080 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 2 07:04:20.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.625213 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 2 07:04:20.628994 iscsid[1136]: iscsid shutting down.
Jul 2 07:04:20.630221 systemd[1]: Stopping iscsid.service - Open-iSCSI...
Jul 2 07:04:20.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.632119 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 2 07:04:20.633204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 07:04:20.643127 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 2 07:04:20.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.644399 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 2 07:04:20.644625 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 07:04:20.646415 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 2 07:04:20.646597 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 2 07:04:20.661560 systemd[1]: iscsid.service: Deactivated successfully.
Jul 2 07:04:20.663345 systemd[1]: Stopped iscsid.service - Open-iSCSI.
Jul 2 07:04:20.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.665993 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver...
Jul 2 07:04:20.668415 systemd[1]: iscsiuio.service: Deactivated successfully.
Jul 2 07:04:20.669425 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver.
Jul 2 07:04:20.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.674325 ignition[1376]: INFO : Ignition 2.15.0
Jul 2 07:04:20.675441 ignition[1376]: INFO : Stage: umount
Jul 2 07:04:20.675441 ignition[1376]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 2 07:04:20.675441 ignition[1376]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 07:04:20.678816 ignition[1376]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 07:04:20.678816 ignition[1376]: INFO : PUT result: OK
Jul 2 07:04:20.680906 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 2 07:04:20.681962 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 2 07:04:20.683766 ignition[1376]: INFO : umount: umount passed
Jul 2 07:04:20.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.692338 ignition[1376]: INFO : Ignition finished successfully
Jul 2 07:04:20.692720 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 2 07:04:20.695684 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 2 07:04:20.695809 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 2 07:04:20.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.704836 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 2 07:04:20.704935 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 2 07:04:20.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.708332 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 2 07:04:20.709437 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 2 07:04:20.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.712499 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jul 2 07:04:20.712728 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jul 2 07:04:20.714758 systemd[1]: Stopped target network.target - Network.
Jul 2 07:04:20.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:20.716470 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 2 07:04:20.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.716541 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:04:20.718789 systemd[1]: Stopped target paths.target - Path Units. Jul 2 07:04:20.720902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:04:20.721151 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:04:20.724255 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 07:04:20.725260 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 07:04:20.731541 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:04:20.731612 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 07:04:20.735593 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:04:20.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.736541 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 07:04:20.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.737914 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:04:20.737977 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 2 07:04:20.741504 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 07:04:20.743410 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 07:04:20.743992 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:04:20.744086 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 07:04:20.744410 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:04:20.744445 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 07:04:20.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.758000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:04:20.749583 systemd-networkd[1131]: eth0: DHCPv6 lease lost Jul 2 07:04:20.756322 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:04:20.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.763000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:04:20.756431 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 07:04:20.759514 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:04:20.759611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 07:04:20.764099 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:04:20.764144 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:04:20.772431 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 07:04:20.774560 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:04:20.774649 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 2 07:04:20.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.778197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:04:20.779155 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:04:20.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.781376 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:04:20.781444 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 07:04:20.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.782614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 07:04:20.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.782660 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:04:20.789747 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:04:20.796575 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 07:04:20.796663 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:04:20.803543 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 2 07:04:20.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.803768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:04:20.806447 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:04:20.806532 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 07:04:20.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.810496 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:04:20.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.810531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:04:20.811768 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:04:20.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.811828 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 07:04:20.813882 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:04:20.813934 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 07:04:20.816975 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:04:20.817032 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:04:20.830876 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jul 2 07:04:20.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.832010 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:04:20.832124 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:04:20.834460 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:04:20.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:20.834648 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 07:04:20.838935 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:04:20.839025 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 07:04:20.842151 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 07:04:20.854092 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 07:04:20.866916 systemd[1]: Switching root. 
Jul 2 07:04:20.871000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:04:20.871000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:04:20.871000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:04:20.873000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:04:20.873000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:04:20.889428 systemd-journald[180]: Journal stopped Jul 2 07:04:22.213630 systemd-journald[180]: Received SIGTERM from PID 1 (systemd). Jul 2 07:04:22.213739 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jul 2 07:04:22.213765 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:04:22.213790 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:04:22.213809 kernel: SELinux: policy capability open_perms=1 Jul 2 07:04:22.213832 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:04:22.213856 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:04:22.213874 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:04:22.213892 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:04:22.213909 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:04:22.213927 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:04:22.213951 systemd[1]: Successfully loaded SELinux policy in 44.423ms. Jul 2 07:04:22.213983 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.886ms. Jul 2 07:04:22.214005 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:04:22.214025 systemd[1]: Detected virtualization amazon. Jul 2 07:04:22.214046 systemd[1]: Detected architecture x86-64. Jul 2 07:04:22.214115 systemd[1]: Detected first boot. 
Jul 2 07:04:22.214136 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:04:22.214157 systemd[1]: Populated /etc with preset unit settings.
Jul 2 07:04:22.214177 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 07:04:22.214253 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jul 2 07:04:22.214275 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 2 07:04:22.214299 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 2 07:04:22.214320 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 2 07:04:22.214340 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 2 07:04:22.214367 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 2 07:04:22.214386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 2 07:04:22.214404 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 2 07:04:22.214422 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 2 07:04:22.214445 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 2 07:04:22.214464 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 2 07:04:22.214492 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 2 07:04:22.214516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 2 07:04:22.214538 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 2 07:04:22.214562 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 2 07:04:22.214585 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 2 07:04:22.214606 systemd[1]: Reached target slices.target - Slice Units.
Jul 2 07:04:22.214629 systemd[1]: Reached target swap.target - Swaps.
Jul 2 07:04:22.214653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 2 07:04:22.214676 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 2 07:04:22.228950 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe.
Jul 2 07:04:22.229036 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jul 2 07:04:22.229063 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 2 07:04:22.229085 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 2 07:04:22.229109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 2 07:04:22.229133 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 2 07:04:22.229213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 2 07:04:22.229249 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 2 07:04:22.229279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 2 07:04:22.229303 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 2 07:04:22.229328 systemd[1]: Mounting media.mount - External Media Directory...
Jul 2 07:04:22.229351 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:22.229381 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 2 07:04:22.229404 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 2 07:04:22.229429 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 2 07:04:22.229453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 2 07:04:22.229479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 07:04:22.229503 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 2 07:04:22.229588 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 2 07:04:22.229622 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 07:04:22.229645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 07:04:22.229670 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 07:04:22.231997 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 2 07:04:22.232040 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 07:04:22.232061 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 07:04:22.232086 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 2 07:04:22.232105 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Jul 2 07:04:22.232123 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 2 07:04:22.235220 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 2 07:04:22.235255 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 2 07:04:22.235275 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 2 07:04:22.235293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 2 07:04:22.235312 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:22.235337 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 2 07:04:22.235738 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 2 07:04:22.235762 systemd[1]: Mounted media.mount - External Media Directory.
Jul 2 07:04:22.235782 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 2 07:04:22.235801 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 2 07:04:22.235819 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 2 07:04:22.235837 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 2 07:04:22.239520 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 07:04:22.239559 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 2 07:04:22.239587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:04:22.239608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 07:04:22.239630 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:04:22.239650 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 07:04:22.239670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 2 07:04:22.239749 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 2 07:04:22.239771 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 2 07:04:22.239793 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 2 07:04:22.239814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 2 07:04:22.239839 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 2 07:04:22.239859 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 2 07:04:22.239882 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 07:04:22.239904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 2 07:04:22.239926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:04:22.239947 kernel: loop: module loaded
Jul 2 07:04:22.239970 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed...
Jul 2 07:04:22.239990 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:04:22.240010 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 07:04:22.240029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 07:04:22.240073 systemd-journald[1513]: Journal started
Jul 2 07:04:22.240956 systemd-journald[1513]: Runtime Journal (/run/log/journal/ec2d9dc483d2cf5fc3b1578accd71b44) is 4.8M, max 38.6M, 33.8M free.
Jul 2 07:04:21.882000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 07:04:21.882000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jul 2 07:04:22.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.242899 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 2 07:04:22.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.201000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 07:04:22.201000 audit[1513]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd82e0ce90 a2=4000 a3=7ffd82e0cf2c items=0 ppid=1 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:04:22.201000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 07:04:22.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.270950 systemd-journald[1513]: Time spent on flushing to /var/log/journal/ec2d9dc483d2cf5fc3b1578accd71b44 is 141.345ms for 1050 entries.
Jul 2 07:04:22.270950 systemd-journald[1513]: System Journal (/var/log/journal/ec2d9dc483d2cf5fc3b1578accd71b44) is 8.0M, max 195.6M, 187.6M free.
Jul 2 07:04:22.420773 systemd-journald[1513]: Received client request to flush runtime journal.
Jul 2 07:04:22.420856 kernel: fuse: init (API version 7.37)
Jul 2 07:04:22.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.248889 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 2 07:04:22.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.251590 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed.
Jul 2 07:04:22.254179 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 2 07:04:22.270854 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 2 07:04:22.294673 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 07:04:22.294923 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 2 07:04:22.306442 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 2 07:04:22.314381 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 2 07:04:22.396877 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 2 07:04:22.403882 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 2 07:04:22.423030 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 2 07:04:22.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.459390 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 2 07:04:22.465920 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 2 07:04:22.487717 kernel: ACPI: bus type drm_connector registered
Jul 2 07:04:22.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.488728 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:04:22.490836 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 07:04:22.498156 udevadm[1565]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 07:04:22.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:22.536238 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 2 07:04:22.543223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 2 07:04:22.594803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 2 07:04:22.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:23.344379 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 2 07:04:23.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:23.350993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 2 07:04:23.391739 systemd-udevd[1575]: Using default interface naming scheme 'v252'.
Jul 2 07:04:23.431343 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 2 07:04:23.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:23.437855 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 2 07:04:23.454261 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 2 07:04:23.536332 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 2 07:04:23.562163 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 2 07:04:23.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:23.600579 (udev-worker)[1581]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 07:04:23.602854 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1577)
Jul 2 07:04:23.705811 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1591)
Jul 2 07:04:23.722394 systemd-networkd[1583]: lo: Link UP
Jul 2 07:04:23.722408 systemd-networkd[1583]: lo: Gained carrier
Jul 2 07:04:23.723349 systemd-networkd[1583]: Enumeration completed
Jul 2 07:04:23.723497 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 07:04:23.723502 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 07:04:23.723530 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 2 07:04:23.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:23.730156 systemd-networkd[1583]: eth0: Link UP
Jul 2 07:04:23.730356 systemd-networkd[1583]: eth0: Gained carrier
Jul 2 07:04:23.730383 systemd-networkd[1583]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 2 07:04:23.730713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 07:04:23.731988 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 2 07:04:23.739014 systemd-networkd[1583]: eth0: DHCPv4 address 172.31.25.147/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 07:04:23.775714 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 2 07:04:23.779727 kernel: ACPI: button: Power Button [PWRF]
Jul 2 07:04:23.789159 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Jul 2 07:04:23.796762 kernel: ACPI: button: Sleep Button [SLPF]
Jul 2 07:04:23.866717 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Jul 2 07:04:23.933531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jul 2 07:04:23.956720 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Jul 2 07:04:23.968746 kernel: mousedev: PS/2 mouse device common for all mice
Jul 2 07:04:24.097303 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 2 07:04:24.104474 kernel: kauditd_printk_skb: 76 callbacks suppressed
Jul 2 07:04:24.104625 kernel: audit: type=1130 audit(1719903864.098:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.107353 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 2 07:04:24.129264 lvm[1691]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:04:24.158401 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 2 07:04:24.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.159960 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 2 07:04:24.166133 kernel: audit: type=1130 audit(1719903864.159:119): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.166979 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 2 07:04:24.174368 lvm[1694]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 07:04:24.199138 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 2 07:04:24.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.200708 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 2 07:04:24.205062 kernel: audit: type=1130 audit(1719903864.199:120): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.204932 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 07:04:24.204969 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 2 07:04:24.206849 systemd[1]: Reached target machines.target - Containers.
Jul 2 07:04:24.218100 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 2 07:04:24.221490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 07:04:24.221609 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:24.224033 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update...
Jul 2 07:04:24.227340 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 2 07:04:24.232241 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 2 07:04:24.245134 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 2 07:04:24.255156 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1697 (bootctl)
Jul 2 07:04:24.257774 kernel: loop0: detected capacity change from 0 to 80600
Jul 2 07:04:24.259911 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM...
Jul 2 07:04:24.266144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 2 07:04:24.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.270706 kernel: audit: type=1130 audit(1719903864.266:121): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.322721 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 07:04:24.349737 kernel: loop1: detected capacity change from 0 to 139360
Jul 2 07:04:24.403936 kernel: loop2: detected capacity change from 0 to 209816
Jul 2 07:04:24.442838 systemd-fsck[1707]: fsck.fat 4.2 (2021-01-31)
Jul 2 07:04:24.442838 systemd-fsck[1707]: /dev/nvme0n1p1: 808 files, 120378/258078 clusters
Jul 2 07:04:24.447103 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM.
Jul 2 07:04:24.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.457943 kernel: audit: type=1130 audit(1719903864.448:122): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.457852 systemd[1]: Mounting boot.mount - Boot partition...
Jul 2 07:04:24.477877 systemd[1]: Mounted boot.mount - Boot partition.
Jul 2 07:04:24.532917 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update.
Jul 2 07:04:24.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.537714 kernel: audit: type=1130 audit(1719903864.534:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.575720 kernel: loop3: detected capacity change from 0 to 60984
Jul 2 07:04:24.626742 kernel: loop4: detected capacity change from 0 to 80600
Jul 2 07:04:24.657718 kernel: loop5: detected capacity change from 0 to 139360
Jul 2 07:04:24.697895 kernel: loop6: detected capacity change from 0 to 209816
Jul 2 07:04:24.744269 kernel: loop7: detected capacity change from 0 to 60984
Jul 2 07:04:24.748746 kernel: audit: type=1130 audit(1719903864.745:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.741939 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 2 07:04:24.744167 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 2 07:04:24.776457 (sd-sysext)[1728]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jul 2 07:04:24.777522 (sd-sysext)[1728]: Merged extensions into '/usr'.
Jul 2 07:04:24.780006 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 2 07:04:24.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.783827 kernel: audit: type=1130 audit(1719903864.780:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:24.785059 systemd[1]: Starting ensure-sysext.service...
Jul 2 07:04:24.788734 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Jul 2 07:04:24.841999 systemd[1]: Reloading.
Jul 2 07:04:24.850280 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 07:04:24.853994 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 07:04:24.855154 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 2 07:04:24.865364 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 07:04:24.866792 systemd-networkd[1583]: eth0: Gained IPv6LL
Jul 2 07:04:24.973513 ldconfig[1696]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 2 07:04:25.151032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:04:25.242551 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 2 07:04:25.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.246908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 2 07:04:25.251023 kernel: audit: type=1130 audit(1719903865.244:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.254725 kernel: audit: type=1130 audit(1719903865.250:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.259880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Jul 2 07:04:25.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.265635 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 07:04:25.274062 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 2 07:04:25.278171 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 2 07:04:25.293097 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 2 07:04:25.304945 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 2 07:04:25.309141 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 2 07:04:25.322783 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.323286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 07:04:25.330000 audit[1826]: SYSTEM_BOOT pid=1826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.332095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 07:04:25.335942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 2 07:04:25.340880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 2 07:04:25.342441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 07:04:25.342789 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:25.343012 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.344728 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:04:25.345185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 07:04:25.354628 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.355568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 07:04:25.362456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 2 07:04:25.363684 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 07:04:25.363988 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:25.364268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.372882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.373875 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 2 07:04:25.382328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 2 07:04:25.383794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 2 07:04:25.384131 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:25.384439 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:04:25.386095 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 2 07:04:25.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.388112 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:04:25.388334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 2 07:04:25.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.399304 systemd[1]: Finished ensure-sysext.service.
Jul 2 07:04:25.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.401226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 2 07:04:25.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.403147 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:04:25.403377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 2 07:04:25.406763 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:04:25.414085 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 2 07:04:25.425045 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:04:25.425292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 2 07:04:25.426920 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 2 07:04:25.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.448763 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:04:25.448987 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 2 07:04:25.450838 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 2 07:04:25.484659 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 2 07:04:25.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:25.486572 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:04:25.525000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 07:04:25.525000 audit[1849]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe45ae66e0 a2=420 a3=0 items=0 ppid=1812 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:04:25.525000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 07:04:25.527208 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 07:04:25.527675 augenrules[1849]: No rules
Jul 2 07:04:25.580884 systemd-resolved[1821]: Positive Trust Anchors:
Jul 2 07:04:25.580906 systemd-resolved[1821]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:04:25.581040 systemd-resolved[1821]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:04:25.590351 systemd-resolved[1821]: Defaulting to hostname 'linux'.
Jul 2 07:04:25.590719 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 2 07:04:25.592435 systemd[1]: Reached target time-set.target - System Time Set.
Jul 2 07:04:25.596264 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 2 07:04:25.597647 systemd[1]: Reached target network.target - Network.
Jul 2 07:04:25.598662 systemd[1]: Reached target network-online.target - Network is Online.
Jul 2 07:04:25.599811 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 2 07:04:25.600945 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 2 07:04:25.602200 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 2 07:04:25.604318 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 2 07:04:25.606229 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 2 07:04:25.607537 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 2 07:04:25.608678 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 2 07:04:25.609847 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 07:04:25.609888 systemd[1]: Reached target paths.target - Path Units.
Jul 2 07:04:25.610974 systemd[1]: Reached target timers.target - Timer Units.
Jul 2 07:04:25.613212 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 2 07:04:25.616020 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 2 07:04:25.619037 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 2 07:04:25.620290 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:25.624727 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 2 07:04:25.625929 systemd[1]: Reached target sockets.target - Socket Units.
Jul 2 07:04:25.627178 systemd[1]: Reached target basic.target - Basic System.
Jul 2 07:04:25.628836 systemd[1]: System is tainted: cgroupsv1
Jul 2 07:04:25.628885 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 2 07:04:25.628905 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 2 07:04:25.631319 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 2 07:04:25.635243 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jul 2 07:04:25.637795 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 2 07:04:25.640465 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 2 07:04:25.646565 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 2 07:04:25.647949 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 2 07:04:25.650600 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 07:04:25.653521 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 2 07:04:25.669444 jq[1862]: false
Jul 2 07:04:25.675226 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 2 07:04:25.680662 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 2 07:04:25.685785 systemd[1]: Starting setup-oem.service - Setup OEM...
Jul 2 07:04:25.688524 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 2 07:04:26.699831 systemd-timesyncd[1822]: Contacted time server 148.135.68.31:123 (0.flatcar.pool.ntp.org).
Jul 2 07:04:26.699882 systemd-timesyncd[1822]: Initial clock synchronization to Tue 2024-07-02 07:04:26.699725 UTC.
Jul 2 07:04:26.706736 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 2 07:04:26.711159 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 2 07:04:26.715764 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:04:26.715959 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 2 07:04:26.782898 jq[1879]: true
Jul 2 07:04:26.718472 systemd[1]: Starting update-engine.service - Update Engine...
Jul 2 07:04:26.733720 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 2 07:04:26.752017 systemd-resolved[1821]: Clock change detected. Flushing caches.
Jul 2 07:04:26.783541 jq[1890]: true
Jul 2 07:04:26.753653 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 07:04:26.753992 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 2 07:04:26.769376 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 07:04:26.770518 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 2 07:04:26.814828 systemd[1]: Finished setup-oem.service - Setup OEM.
Jul 2 07:04:26.818708 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Jul 2 07:04:26.851732 update_engine[1878]: I0702 07:04:26.851658 1878 main.cc:92] Flatcar Update Engine starting
Jul 2 07:04:26.868989 tar[1887]: linux-amd64/helm
Jul 2 07:04:26.897826 dbus-daemon[1861]: [system] SELinux support is enabled
Jul 2 07:04:26.898138 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 2 07:04:26.902464 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 07:04:26.902503 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 2 07:04:26.903936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 07:04:26.903972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 2 07:04:26.945524 dbus-daemon[1861]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1583 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 07:04:26.955822 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jul 2 07:04:26.972020 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 07:04:26.972425 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 2 07:04:26.979490 update_engine[1878]: I0702 07:04:26.976292 1878 update_check_scheduler.cc:74] Next update check in 10m28s
Jul 2 07:04:26.976683 systemd[1]: Started update-engine.service - Update Engine.
Jul 2 07:04:26.978745 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 07:04:26.982825 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 2 07:04:27.003311 extend-filesystems[1863]: Found loop4
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found loop5
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found loop6
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found loop7
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found nvme0n1
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found nvme0n1p1
Jul 2 07:04:27.004628 extend-filesystems[1863]: Found nvme0n1p2
Jul 2 07:04:27.015881 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found nvme0n1p3
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found usr
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found nvme0n1p4
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found nvme0n1p6
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found nvme0n1p7
Jul 2 07:04:27.024014 extend-filesystems[1863]: Found nvme0n1p9
Jul 2 07:04:27.024014 extend-filesystems[1863]: Checking size of /dev/nvme0n1p9
Jul 2 07:04:27.083305 extend-filesystems[1863]: Resized partition /dev/nvme0n1p9
Jul 2 07:04:27.084471 bash[1932]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:04:27.085468 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 2 07:04:27.092784 systemd[1]: Starting sshkeys.service...
Jul 2 07:04:27.120172 extend-filesystems[1939]: resize2fs 1.47.0 (5-Feb-2023)
Jul 2 07:04:27.125375 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jul 2 07:04:27.134357 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jul 2 07:04:27.135567 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Jul 2 07:04:27.198132 systemd-logind[1877]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 2 07:04:27.198165 systemd-logind[1877]: Watching system buttons on /dev/input/event2 (Sleep Button)
Jul 2 07:04:27.198189 systemd-logind[1877]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 07:04:27.205083 systemd-logind[1877]: New seat seat0.
Jul 2 07:04:27.214234 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 2 07:04:27.217728 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Jul 2 07:04:27.267536 extend-filesystems[1939]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jul 2 07:04:27.267536 extend-filesystems[1939]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 2 07:04:27.267536 extend-filesystems[1939]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Jul 2 07:04:27.272474 extend-filesystems[1863]: Resized filesystem in /dev/nvme0n1p9
Jul 2 07:04:27.275056 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 07:04:27.275474 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 2 07:04:27.354744 amazon-ssm-agent[1902]: Initializing new seelog logger
Jul 2 07:04:27.360624 amazon-ssm-agent[1902]: New Seelog Logger Creation Complete
Jul 2 07:04:27.360792 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 07:04:27.361376 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 processing appconfig overrides
Jul 2 07:04:27.415297 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 07:04:27.415471 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 processing appconfig overrides
Jul 2 07:04:27.415772 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 07:04:27.415869 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 processing appconfig overrides
Jul 2 07:04:27.416369 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO Proxy environment variables:
Jul 2 07:04:27.448924 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jul 2 07:04:27.449221 amazon-ssm-agent[1902]: 2024/07/02 07:04:27 processing appconfig overrides
Jul 2 07:04:27.568070 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO http_proxy:
Jul 2 07:04:27.674662 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO no_proxy:
Jul 2 07:04:27.778018 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO https_proxy:
Jul 2 07:04:27.872690 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 07:04:27.872863 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jul 2 07:04:27.881068 dbus-daemon[1861]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1923 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 07:04:27.889155 systemd[1]: Starting polkit.service - Authorization Manager...
Jul 2 07:04:27.901156 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO Checking if agent identity type OnPrem can be assumed
Jul 2 07:04:27.910227 polkitd[1974]: Started polkitd version 121
Jul 2 07:04:27.928240 polkitd[1974]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 07:04:27.928461 polkitd[1974]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 07:04:27.929215 polkitd[1974]: Finished loading, compiling and executing 2 rules
Jul 2 07:04:27.929888 dbus-daemon[1861]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 07:04:27.930081 systemd[1]: Started polkit.service - Authorization Manager.
Jul 2 07:04:27.934064 polkitd[1974]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 07:04:27.967581 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1969)
Jul 2 07:04:28.014136 systemd-hostnamed[1923]: Hostname set to (transient)
Jul 2 07:04:28.014869 systemd-resolved[1821]: System hostname changed to 'ip-172-31-25-147'.
Jul 2 07:04:28.023410 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO Checking if agent identity type EC2 can be assumed
Jul 2 07:04:28.052628 locksmithd[1928]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 07:04:28.062600 coreos-metadata[1860]: Jul 02 07:04:28.049 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 07:04:28.080520 coreos-metadata[1860]: Jul 02 07:04:28.080 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jul 2 07:04:28.081399 coreos-metadata[1860]: Jul 02 07:04:28.081 INFO Fetch successful
Jul 2 07:04:28.081541 coreos-metadata[1860]: Jul 02 07:04:28.081 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jul 2 07:04:28.082378 coreos-metadata[1860]: Jul 02 07:04:28.082 INFO Fetch successful
Jul 2 07:04:28.082515 coreos-metadata[1860]: Jul 02 07:04:28.082 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jul 2 07:04:28.083373 coreos-metadata[1860]: Jul 02 07:04:28.083 INFO Fetch successful
Jul 2 07:04:28.083517 coreos-metadata[1860]: Jul 02 07:04:28.083 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jul 2 07:04:28.084499 coreos-metadata[1860]: Jul 02 07:04:28.084 INFO Fetch successful
Jul 2 07:04:28.084661 coreos-metadata[1860]: Jul 02 07:04:28.084 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jul 2 07:04:28.092832 coreos-metadata[1860]: Jul 02 07:04:28.092 INFO Fetch failed with 404: resource not found
Jul 2 07:04:28.093008 coreos-metadata[1860]: Jul 02 07:04:28.092 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jul 2 07:04:28.101308 coreos-metadata[1860]: Jul 02 07:04:28.101 INFO Fetch successful
Jul 2 07:04:28.101912 coreos-metadata[1860]: Jul 02 07:04:28.101 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jul 2 07:04:28.110289 coreos-metadata[1860]: Jul 02 07:04:28.110 INFO Fetch successful
Jul 2 07:04:28.110472 coreos-metadata[1860]: Jul 02 07:04:28.110 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jul 2 07:04:28.112505 coreos-metadata[1860]: Jul 02 07:04:28.112 INFO Fetch successful
Jul 2 07:04:28.112689 coreos-metadata[1860]: Jul 02 07:04:28.112 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jul 2 07:04:28.114023 coreos-metadata[1860]: Jul 02 07:04:28.113 INFO Fetch successful
Jul 2 07:04:28.114270 coreos-metadata[1860]: Jul 02 07:04:28.114 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jul 2 07:04:28.123968 coreos-metadata[1860]: Jul 02 07:04:28.123 INFO Fetch successful
Jul 2 07:04:28.173914 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO Agent will take identity from EC2
Jul 2 07:04:28.186941 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jul 2 07:04:28.189061 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 2 07:04:28.278006 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 07:04:28.319884 containerd[1899]: time="2024-07-02T07:04:28.319715947Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13
Jul 2 07:04:28.383845 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 07:04:28.486789 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] using named pipe channel for IPC
Jul 2 07:04:28.570688 coreos-metadata[1945]: Jul 02 07:04:28.560 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jul 2 07:04:28.580671 coreos-metadata[1945]: Jul 02 07:04:28.580 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Jul 2 07:04:28.581429 coreos-metadata[1945]: Jul 02 07:04:28.581 INFO Fetch successful
Jul 2 07:04:28.581429 coreos-metadata[1945]: Jul 02 07:04:28.581 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Jul 2 07:04:28.582432 coreos-metadata[1945]: Jul 02 07:04:28.582 INFO Fetch successful
Jul 2 07:04:28.584977 containerd[1899]: time="2024-07-02T07:04:28.584927461Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 2 07:04:28.585122 containerd[1899]: time="2024-07-02T07:04:28.585104671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.585963 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Jul 2 07:04:28.589068 unknown[1945]: wrote ssh authorized keys file for user: core
Jul 2 07:04:28.597092 containerd[1899]: time="2024-07-02T07:04:28.597040070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:04:28.620647 containerd[1899]: time="2024-07-02T07:04:28.620456613Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621358991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621403593Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621691410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621780564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621800921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.621913289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.622326876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.622357168Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.622375893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:04:28.623280 containerd[1899]: time="2024-07-02T07:04:28.623229368Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:04:28.625705 containerd[1899]: time="2024-07-02T07:04:28.623663008Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 07:04:28.625705 containerd[1899]: time="2024-07-02T07:04:28.625082345Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 07:04:28.625705 containerd[1899]: time="2024-07-02T07:04:28.625112165Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 07:04:28.646648 update-ssh-keys[2062]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:04:28.647583 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jul 2 07:04:28.653799 systemd[1]: Finished sshkeys.service.
Jul 2 07:04:28.660149 containerd[1899]: time="2024-07-02T07:04:28.660025887Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 07:04:28.660149 containerd[1899]: time="2024-07-02T07:04:28.660118476Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 07:04:28.660747 containerd[1899]: time="2024-07-02T07:04:28.660429159Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 07:04:28.660747 containerd[1899]: time="2024-07-02T07:04:28.660509741Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 2 07:04:28.660747 containerd[1899]: time="2024-07-02T07:04:28.660535722Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 2 07:04:28.660747 containerd[1899]: time="2024-07-02T07:04:28.660601857Z" level=info msg="NRI interface is disabled by configuration."
Jul 2 07:04:28.661074 containerd[1899]: time="2024-07-02T07:04:28.660621887Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 07:04:28.661264 containerd[1899]: time="2024-07-02T07:04:28.661244126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 2 07:04:28.661473 containerd[1899]: time="2024-07-02T07:04:28.661363156Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 2 07:04:28.661615 containerd[1899]: time="2024-07-02T07:04:28.661595582Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 2 07:04:28.665607 containerd[1899]: time="2024-07-02T07:04:28.661705017Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666651403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666698563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..."
type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666721626Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666741505Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666764745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666786762Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666806374Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666824088Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:04:28.667594 containerd[1899]: time="2024-07-02T07:04:28.666991584Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:04:28.669721 containerd[1899]: time="2024-07-02T07:04:28.669691599Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:04:28.669856 containerd[1899]: time="2024-07-02T07:04:28.669836670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.669960 containerd[1899]: time="2024-07-02T07:04:28.669943993Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 2 07:04:28.670057 containerd[1899]: time="2024-07-02T07:04:28.670040173Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:04:28.671312 containerd[1899]: time="2024-07-02T07:04:28.671291935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671469 containerd[1899]: time="2024-07-02T07:04:28.671451878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671562 containerd[1899]: time="2024-07-02T07:04:28.671536355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671646 containerd[1899]: time="2024-07-02T07:04:28.671631878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671715 containerd[1899]: time="2024-07-02T07:04:28.671703067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671802 containerd[1899]: time="2024-07-02T07:04:28.671789189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671874 containerd[1899]: time="2024-07-02T07:04:28.671862190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.671951 containerd[1899]: time="2024-07-02T07:04:28.671938622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.672018 containerd[1899]: time="2024-07-02T07:04:28.672006097Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:04:28.672238 containerd[1899]: time="2024-07-02T07:04:28.672223692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Jul 2 07:04:28.672317 containerd[1899]: time="2024-07-02T07:04:28.672303655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.672384 containerd[1899]: time="2024-07-02T07:04:28.672371781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.672451 containerd[1899]: time="2024-07-02T07:04:28.672439534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.672523 containerd[1899]: time="2024-07-02T07:04:28.672511285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.676394 containerd[1899]: time="2024-07-02T07:04:28.672896934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.676394 containerd[1899]: time="2024-07-02T07:04:28.672949874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:04:28.676394 containerd[1899]: time="2024-07-02T07:04:28.672994451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 07:04:28.676548 containerd[1899]: time="2024-07-02T07:04:28.673461749Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:04:28.676548 containerd[1899]: time="2024-07-02T07:04:28.673581923Z" level=info msg="Connect containerd service" Jul 2 07:04:28.676548 containerd[1899]: time="2024-07-02T07:04:28.673709385Z" level=info msg="using legacy CRI server" Jul 2 07:04:28.676548 containerd[1899]: time="2024-07-02T07:04:28.673721578Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 07:04:28.676548 containerd[1899]: time="2024-07-02T07:04:28.673782688Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:04:28.682506 containerd[1899]: time="2024-07-02T07:04:28.682449719Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 07:04:28.682904 containerd[1899]: time="2024-07-02T07:04:28.682873145Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:04:28.683809 containerd[1899]: time="2024-07-02T07:04:28.683760076Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:04:28.685402 containerd[1899]: time="2024-07-02T07:04:28.685357418Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:04:28.688228 containerd[1899]: time="2024-07-02T07:04:28.688197121Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:04:28.689030 containerd[1899]: time="2024-07-02T07:04:28.689008291Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:04:28.689294 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 2 07:04:28.690250 containerd[1899]: time="2024-07-02T07:04:28.690225978Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:04:28.718150 containerd[1899]: time="2024-07-02T07:04:28.684339611Z" level=info msg="Start subscribing containerd event" Jul 2 07:04:28.718470 containerd[1899]: time="2024-07-02T07:04:28.718437966Z" level=info msg="Start recovering state" Jul 2 07:04:28.719123 containerd[1899]: time="2024-07-02T07:04:28.719100685Z" level=info msg="Start event monitor" Jul 2 07:04:28.719425 containerd[1899]: time="2024-07-02T07:04:28.719406993Z" level=info msg="Start snapshots syncer" Jul 2 07:04:28.719507 containerd[1899]: time="2024-07-02T07:04:28.719494310Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:04:28.719596 containerd[1899]: time="2024-07-02T07:04:28.719582689Z" level=info msg="Start streaming server" Jul 2 07:04:28.720176 systemd[1]: Started containerd.service - containerd container runtime. 
Jul 2 07:04:28.720896 containerd[1899]: time="2024-07-02T07:04:28.720870651Z" level=info msg="containerd successfully booted in 0.425441s"
Jul 2 07:04:28.789746 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] Starting Core Agent
Jul 2 07:04:28.890051 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Jul 2 07:04:28.990441 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [Registrar] Starting registrar module
Jul 2 07:04:29.025442 amazon-ssm-agent[1902]: 2024-07-02 07:04:27 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Jul 2 07:04:29.025442 amazon-ssm-agent[1902]: 2024-07-02 07:04:28 INFO [EC2Identity] EC2 registration was successful.
Jul 2 07:04:29.025442 amazon-ssm-agent[1902]: 2024-07-02 07:04:28 INFO [CredentialRefresher] credentialRefresher has started
Jul 2 07:04:29.025442 amazon-ssm-agent[1902]: 2024-07-02 07:04:28 INFO [CredentialRefresher] Starting credentials refresher loop
Jul 2 07:04:29.025442 amazon-ssm-agent[1902]: 2024-07-02 07:04:29 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Jul 2 07:04:29.091824 amazon-ssm-agent[1902]: 2024-07-02 07:04:29 INFO [CredentialRefresher] Next credential rotation will be in 30.324990721383333 minutes
Jul 2 07:04:29.334191 tar[1887]: linux-amd64/LICENSE
Jul 2 07:04:29.334753 tar[1887]: linux-amd64/README.md
Jul 2 07:04:29.345212 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 2 07:04:29.664035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:29.742537 sshd_keygen[1907]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 07:04:29.779260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 2 07:04:29.787996 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 2 07:04:29.800191 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 07:04:29.800713 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 2 07:04:29.809122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 2 07:04:29.825913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 2 07:04:29.833055 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 2 07:04:29.841082 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 2 07:04:29.842906 systemd[1]: Reached target getty.target - Login Prompts.
Jul 2 07:04:29.844143 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 2 07:04:29.855139 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jul 2 07:04:29.871190 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 07:04:29.871651 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jul 2 07:04:30.045302 systemd[1]: Startup finished in 7.106s (kernel) + 8.007s (userspace) = 15.113s.
Jul 2 07:04:30.048156 amazon-ssm-agent[1902]: 2024-07-02 07:04:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Jul 2 07:04:30.149042 amazon-ssm-agent[1902]: 2024-07-02 07:04:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2122) started
Jul 2 07:04:30.251228 amazon-ssm-agent[1902]: 2024-07-02 07:04:30 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Jul 2 07:04:30.722985 kubelet[2097]: E0702 07:04:30.722907 2097 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:04:30.725199 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:04:30.725569 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:04:35.121949 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 2 07:04:35.137089 systemd[1]: Started sshd@0-172.31.25.147:22-139.178.89.65:52240.service - OpenSSH per-connection server daemon (139.178.89.65:52240).
Jul 2 07:04:35.322243 sshd[2134]: Accepted publickey for core from 139.178.89.65 port 52240 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:35.327712 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:35.343400 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 2 07:04:35.354076 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 2 07:04:35.360636 systemd-logind[1877]: New session 1 of user core.
Jul 2 07:04:35.376466 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 2 07:04:35.382517 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 2 07:04:35.388910 (systemd)[2139]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:35.568364 systemd[2139]: Queued start job for default target default.target.
Jul 2 07:04:35.568692 systemd[2139]: Reached target paths.target - Paths.
Jul 2 07:04:35.568716 systemd[2139]: Reached target sockets.target - Sockets.
Jul 2 07:04:35.568734 systemd[2139]: Reached target timers.target - Timers.
Jul 2 07:04:35.568749 systemd[2139]: Reached target basic.target - Basic System.
Jul 2 07:04:35.568902 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 2 07:04:35.573828 systemd[2139]: Reached target default.target - Main User Target.
Jul 2 07:04:35.574154 systemd[2139]: Startup finished in 172ms.
Jul 2 07:04:35.579343 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 2 07:04:35.738801 systemd[1]: Started sshd@1-172.31.25.147:22-139.178.89.65:52256.service - OpenSSH per-connection server daemon (139.178.89.65:52256).
Jul 2 07:04:35.905395 sshd[2148]: Accepted publickey for core from 139.178.89.65 port 52256 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:35.907175 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:35.914860 systemd-logind[1877]: New session 2 of user core.
Jul 2 07:04:35.926123 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 2 07:04:36.056764 sshd[2148]: pam_unix(sshd:session): session closed for user core
Jul 2 07:04:36.061274 systemd[1]: sshd@1-172.31.25.147:22-139.178.89.65:52256.service: Deactivated successfully.
Jul 2 07:04:36.063532 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 07:04:36.064802 systemd-logind[1877]: Session 2 logged out. Waiting for processes to exit.
Jul 2 07:04:36.067779 systemd-logind[1877]: Removed session 2.
Jul 2 07:04:36.084386 systemd[1]: Started sshd@2-172.31.25.147:22-139.178.89.65:52272.service - OpenSSH per-connection server daemon (139.178.89.65:52272).
Jul 2 07:04:36.242960 sshd[2155]: Accepted publickey for core from 139.178.89.65 port 52272 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:36.244598 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:36.250469 systemd-logind[1877]: New session 3 of user core.
Jul 2 07:04:36.255939 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 2 07:04:36.378201 sshd[2155]: pam_unix(sshd:session): session closed for user core
Jul 2 07:04:36.383607 systemd[1]: sshd@2-172.31.25.147:22-139.178.89.65:52272.service: Deactivated successfully.
Jul 2 07:04:36.385902 systemd-logind[1877]: Session 3 logged out. Waiting for processes to exit.
Jul 2 07:04:36.386249 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 07:04:36.388533 systemd-logind[1877]: Removed session 3.
Jul 2 07:04:36.416510 systemd[1]: Started sshd@3-172.31.25.147:22-139.178.89.65:52278.service - OpenSSH per-connection server daemon (139.178.89.65:52278).
Jul 2 07:04:36.614229 sshd[2162]: Accepted publickey for core from 139.178.89.65 port 52278 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:36.615940 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:36.624514 systemd-logind[1877]: New session 4 of user core.
Jul 2 07:04:36.629994 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 2 07:04:36.777971 sshd[2162]: pam_unix(sshd:session): session closed for user core
Jul 2 07:04:36.781330 systemd[1]: sshd@3-172.31.25.147:22-139.178.89.65:52278.service: Deactivated successfully.
Jul 2 07:04:36.782736 systemd-logind[1877]: Session 4 logged out. Waiting for processes to exit.
Jul 2 07:04:36.782939 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 07:04:36.785384 systemd-logind[1877]: Removed session 4.
Jul 2 07:04:36.806258 systemd[1]: Started sshd@4-172.31.25.147:22-139.178.89.65:52286.service - OpenSSH per-connection server daemon (139.178.89.65:52286).
Jul 2 07:04:36.960367 sshd[2169]: Accepted publickey for core from 139.178.89.65 port 52286 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:36.965349 sshd[2169]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:36.980397 systemd-logind[1877]: New session 5 of user core.
Jul 2 07:04:36.988085 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 2 07:04:37.118187 sudo[2173]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 2 07:04:37.118549 sudo[2173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 07:04:37.145680 sudo[2173]: pam_unix(sudo:session): session closed for user root
Jul 2 07:04:37.169051 sshd[2169]: pam_unix(sshd:session): session closed for user core
Jul 2 07:04:37.178453 systemd[1]: sshd@4-172.31.25.147:22-139.178.89.65:52286.service: Deactivated successfully.
Jul 2 07:04:37.180202 systemd[1]: session-5.scope: Deactivated successfully.
Jul 2 07:04:37.181666 systemd-logind[1877]: Session 5 logged out. Waiting for processes to exit.
Jul 2 07:04:37.184803 systemd-logind[1877]: Removed session 5.
Jul 2 07:04:37.196078 systemd[1]: Started sshd@5-172.31.25.147:22-139.178.89.65:52292.service - OpenSSH per-connection server daemon (139.178.89.65:52292).
Jul 2 07:04:37.354483 sshd[2177]: Accepted publickey for core from 139.178.89.65 port 52292 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:37.356893 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:37.363129 systemd-logind[1877]: New session 6 of user core.
Jul 2 07:04:37.369993 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 2 07:04:37.477824 sudo[2182]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 2 07:04:37.478297 sudo[2182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 07:04:37.482771 sudo[2182]: pam_unix(sudo:session): session closed for user root
Jul 2 07:04:37.490767 sudo[2181]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 2 07:04:37.491220 sudo[2181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 07:04:37.516502 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 2 07:04:37.528166 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jul 2 07:04:37.528273 kernel: audit: type=1305 audit(1719903877.516:146): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 2 07:04:37.528304 kernel: audit: type=1300 audit(1719903877.516:146): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc1a016d0 a2=420 a3=0 items=0 ppid=1 pid=2185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:04:37.516000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jul 2 07:04:37.516000 audit[2185]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffc1a016d0 a2=420 a3=0 items=0 ppid=1 pid=2185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:04:37.528488 auditctl[2185]: No rules
Jul 2 07:04:37.516000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Jul 2 07:04:37.529887 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 2 07:04:37.530677 kernel: audit: type=1327 audit(1719903877.516:146): proctitle=2F7362696E2F617564697463746C002D44
Jul 2 07:04:37.531132 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 2 07:04:37.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.535586 kernel: audit: type=1131 audit(1719903877.529:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.542303 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 2 07:04:37.573893 augenrules[2203]: No rules
Jul 2 07:04:37.575117 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 2 07:04:37.584002 kernel: audit: type=1130 audit(1719903877.574:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.584143 kernel: audit: type=1106 audit(1719903877.575:149): pid=2181 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.575000 audit[2181]: USER_END pid=2181 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.577208 sudo[2181]: pam_unix(sudo:session): session closed for user root
Jul 2 07:04:37.575000 audit[2181]: CRED_DISP pid=2181 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.587696 kernel: audit: type=1104 audit(1719903877.575:150): pid=2181 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.601727 sshd[2177]: pam_unix(sshd:session): session closed for user core
Jul 2 07:04:37.606000 audit[2177]: USER_END pid=2177 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.617688 kernel: audit: type=1106 audit(1719903877.606:151): pid=2177 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.617480 systemd[1]: sshd@5-172.31.25.147:22-139.178.89.65:52292.service: Deactivated successfully.
Jul 2 07:04:37.606000 audit[2177]: CRED_DISP pid=2177 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.623511 kernel: audit: type=1104 audit(1719903877.606:152): pid=2177 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.622422 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 07:04:37.622812 systemd-logind[1877]: Session 6 logged out. Waiting for processes to exit.
Jul 2 07:04:37.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.25.147:22-139.178.89.65:52292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.630274 kernel: audit: type=1131 audit(1719903877.616:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.25.147:22-139.178.89.65:52292 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.629611 systemd-logind[1877]: Removed session 6.
Jul 2 07:04:37.633838 systemd[1]: Started sshd@6-172.31.25.147:22-139.178.89.65:52294.service - OpenSSH per-connection server daemon (139.178.89.65:52294).
Jul 2 07:04:37.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.25.147:22-139.178.89.65:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:37.795000 audit[2210]: USER_ACCT pid=2210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.796951 sshd[2210]: Accepted publickey for core from 139.178.89.65 port 52294 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:04:37.798000 audit[2210]: CRED_ACQ pid=2210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:04:37.798000 audit[2210]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc98c7eff0 a2=3 a3=7f060b6b0480 items=0 ppid=1 pid=2210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:04:37.798000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:04:37.800256 sshd[2210]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:04:37.814498 systemd-logind[1877]: New session 7 of user core.
Jul 2 07:04:37.818931 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 2 07:04:37.839000 audit[2210]: USER_START pid=2210 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:04:37.842000 audit[2213]: CRED_ACQ pid=2213 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:04:37.941000 audit[2214]: USER_ACCT pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:04:37.942815 sudo[2214]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:04:37.941000 audit[2214]: CRED_REFR pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:04:37.943224 sudo[2214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:04:37.951000 audit[2214]: USER_START pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:04:38.218666 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 07:04:38.698246 dockerd[2223]: time="2024-07-02T07:04:38.698065813Z" level=info msg="Starting up" Jul 2 07:04:38.729135 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport726826335-merged.mount: Deactivated successfully. 
Jul 2 07:04:39.298719 dockerd[2223]: time="2024-07-02T07:04:39.298674648Z" level=info msg="Loading containers: start." Jul 2 07:04:39.394000 audit[2255]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2255 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.394000 audit[2255]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffbdd40bd0 a2=0 a3=7ff3e1d9de90 items=0 ppid=2223 pid=2255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.394000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 07:04:39.399000 audit[2257]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2257 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.399000 audit[2257]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff96075950 a2=0 a3=7f6ee70fce90 items=0 ppid=2223 pid=2257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.399000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 07:04:39.409000 audit[2259]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.409000 audit[2259]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdd8f97220 a2=0 a3=7f6268360e90 items=0 ppid=2223 pid=2259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.409000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:04:39.413000 audit[2261]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2261 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.413000 audit[2261]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc40cbe390 a2=0 a3=7f3d76874e90 items=0 ppid=2223 pid=2261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.413000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:04:39.418000 audit[2263]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2263 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.418000 audit[2263]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe5d5ac970 a2=0 a3=7fb2a8ff7e90 items=0 ppid=2223 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.418000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 07:04:39.421000 audit[2265]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.421000 audit[2265]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc60976470 a2=0 a3=7f22b8333e90 items=0 ppid=2223 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.421000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 07:04:39.435000 audit[2267]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2267 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.435000 audit[2267]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc65ddc680 a2=0 a3=7fd3e1b98e90 items=0 ppid=2223 pid=2267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.435000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 07:04:39.440000 audit[2269]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2269 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.440000 audit[2269]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffca36a6670 a2=0 a3=7fba36cdbe90 items=0 ppid=2223 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.440000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 07:04:39.445000 audit[2271]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2271 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.445000 audit[2271]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd4d0ce800 a2=0 a3=7efde4994e90 items=0 ppid=2223 pid=2271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.445000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:04:39.460000 audit[2275]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2275 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.460000 audit[2275]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc33739270 a2=0 a3=7f7b01ee0e90 items=0 ppid=2223 pid=2275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.460000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:04:39.462000 audit[2276]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.462000 audit[2276]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa81485f0 a2=0 a3=7f7e3482ee90 items=0 ppid=2223 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.462000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:04:39.475579 kernel: Initializing XFRM netlink socket Jul 2 07:04:39.514648 (udev-worker)[2235]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 07:04:39.559000 audit[2284]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2284 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.559000 audit[2284]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fffba829e90 a2=0 a3=7fb9befdbe90 items=0 ppid=2223 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.559000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 07:04:39.674000 audit[2287]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.674000 audit[2287]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe623d4a30 a2=0 a3=7fcdbf138e90 items=0 ppid=2223 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.674000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 07:04:39.680000 audit[2291]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.680000 audit[2291]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffec9526a80 a2=0 a3=7f385554ae90 items=0 ppid=2223 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.680000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 07:04:39.683000 audit[2293]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.683000 audit[2293]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffbabeb0c0 a2=0 a3=7ff3cd21ee90 items=0 ppid=2223 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.683000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 07:04:39.686000 audit[2295]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.686000 audit[2295]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc327b1140 a2=0 a3=7f47f1906e90 items=0 ppid=2223 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.686000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 07:04:39.689000 audit[2297]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.689000 audit[2297]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff096c7f90 a2=0 a3=7fd6476d6e90 items=0 ppid=2223 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.689000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 07:04:39.692000 audit[2299]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.692000 audit[2299]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffd58927920 a2=0 a3=7f3c4efa8e90 items=0 ppid=2223 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.692000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 07:04:39.701000 audit[2302]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2302 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.701000 audit[2302]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd51a55370 a2=0 a3=7f5e47260e90 items=0 ppid=2223 pid=2302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.701000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 07:04:39.704000 audit[2304]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.704000 
audit[2304]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff1b8c7470 a2=0 a3=7f7f56b50e90 items=0 ppid=2223 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.704000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 07:04:39.709000 audit[2306]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2306 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.709000 audit[2306]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc4c0a6060 a2=0 a3=7fa141415e90 items=0 ppid=2223 pid=2306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.709000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 07:04:39.712000 audit[2308]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2308 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.712000 audit[2308]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd13212a00 a2=0 a3=7fecaf3fae90 items=0 ppid=2223 pid=2308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.712000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 07:04:39.714815 systemd-networkd[1583]: docker0: Link UP Jul 2 07:04:39.728000 audit[2312]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.728000 audit[2312]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc3596a520 a2=0 a3=7f59d2864e90 items=0 ppid=2223 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.728000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:04:39.729000 audit[2313]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2313 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:04:39.729000 audit[2313]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff5baf1900 a2=0 a3=7efc1b767e90 items=0 ppid=2223 pid=2313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:04:39.729000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 07:04:39.731807 dockerd[2223]: time="2024-07-02T07:04:39.731773102Z" level=info msg="Loading containers: done." 
Jul 2 07:04:39.836907 dockerd[2223]: time="2024-07-02T07:04:39.836851778Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:04:39.837172 dockerd[2223]: time="2024-07-02T07:04:39.837152439Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 07:04:39.837301 dockerd[2223]: time="2024-07-02T07:04:39.837277126Z" level=info msg="Daemon has completed initialization" Jul 2 07:04:39.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:39.929064 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 07:04:39.932752 dockerd[2223]: time="2024-07-02T07:04:39.929761389Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:04:40.938262 containerd[1899]: time="2024-07-02T07:04:40.938215311Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jul 2 07:04:40.978318 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:04:40.978668 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:40.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:40.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:40.990999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jul 2 07:04:41.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:04:41.335687 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:41.413868 kubelet[2362]: E0702 07:04:41.413816 2362 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:41.417948 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:41.418648 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:41.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 07:04:41.687478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511687042.mount: Deactivated successfully. 
Jul 2 07:04:43.677256 containerd[1899]: time="2024-07-02T07:04:43.677203052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:43.678732 containerd[1899]: time="2024-07-02T07:04:43.678689260Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jul 2 07:04:43.680452 containerd[1899]: time="2024-07-02T07:04:43.680417311Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:43.682925 containerd[1899]: time="2024-07-02T07:04:43.682893522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:43.685232 containerd[1899]: time="2024-07-02T07:04:43.685189795Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:43.686384 containerd[1899]: time="2024-07-02T07:04:43.686342881Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 2.748082822s" Jul 2 07:04:43.686477 containerd[1899]: time="2024-07-02T07:04:43.686394298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jul 2 07:04:43.712481 containerd[1899]: time="2024-07-02T07:04:43.712450793Z" level=info msg="PullImage 
\"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jul 2 07:04:45.839808 containerd[1899]: time="2024-07-02T07:04:45.839751298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.841195 containerd[1899]: time="2024-07-02T07:04:45.841141351Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jul 2 07:04:45.842750 containerd[1899]: time="2024-07-02T07:04:45.842714546Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.845652 containerd[1899]: time="2024-07-02T07:04:45.845610024Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.848079 containerd[1899]: time="2024-07-02T07:04:45.848037333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.849321 containerd[1899]: time="2024-07-02T07:04:45.849278697Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.136679863s" Jul 2 07:04:45.849476 containerd[1899]: time="2024-07-02T07:04:45.849451104Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" 
Jul 2 07:04:45.876253 containerd[1899]: time="2024-07-02T07:04:45.876214109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jul 2 07:04:47.173656 containerd[1899]: time="2024-07-02T07:04:47.173601600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:47.174883 containerd[1899]: time="2024-07-02T07:04:47.174827610Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jul 2 07:04:47.176685 containerd[1899]: time="2024-07-02T07:04:47.176652184Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:47.179079 containerd[1899]: time="2024-07-02T07:04:47.179035132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:47.188323 containerd[1899]: time="2024-07-02T07:04:47.188252886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:47.189190 containerd[1899]: time="2024-07-02T07:04:47.189143437Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.312717958s" Jul 2 07:04:47.189300 containerd[1899]: time="2024-07-02T07:04:47.189194149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference 
\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jul 2 07:04:47.213777 containerd[1899]: time="2024-07-02T07:04:47.213727959Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 07:04:48.408777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3280522523.mount: Deactivated successfully. Jul 2 07:04:49.103992 containerd[1899]: time="2024-07-02T07:04:49.103935476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.105428 containerd[1899]: time="2024-07-02T07:04:49.105335747Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 07:04:49.107257 containerd[1899]: time="2024-07-02T07:04:49.107220449Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.109667 containerd[1899]: time="2024-07-02T07:04:49.109635862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.111754 containerd[1899]: time="2024-07-02T07:04:49.111721262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.112517 containerd[1899]: time="2024-07-02T07:04:49.112475660Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 1.898702802s" Jul 2 07:04:49.112622 containerd[1899]: 
time="2024-07-02T07:04:49.112525371Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 07:04:49.139837 containerd[1899]: time="2024-07-02T07:04:49.139807091Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:04:49.610767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1808611949.mount: Deactivated successfully. Jul 2 07:04:49.621864 containerd[1899]: time="2024-07-02T07:04:49.621808229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.623303 containerd[1899]: time="2024-07-02T07:04:49.623261706Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 07:04:49.625570 containerd[1899]: time="2024-07-02T07:04:49.625518095Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.627774 containerd[1899]: time="2024-07-02T07:04:49.627708137Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.630669 containerd[1899]: time="2024-07-02T07:04:49.630636800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:49.631585 containerd[1899]: time="2024-07-02T07:04:49.631514769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 491.44046ms" 
Jul 2 07:04:49.632061 containerd[1899]: time="2024-07-02T07:04:49.631578242Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 07:04:49.657953 containerd[1899]: time="2024-07-02T07:04:49.657922937Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jul 2 07:04:50.207041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount963751689.mount: Deactivated successfully.
Jul 2 07:04:51.670043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 07:04:51.670372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:51.676586 kernel: kauditd_printk_skb: 88 callbacks suppressed
Jul 2 07:04:51.676710 kernel: audit: type=1130 audit(1719903891.668:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:51.676746 kernel: audit: type=1131 audit(1719903891.668:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:51.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:51.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:51.683442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 07:04:52.078019 kernel: audit: type=1130 audit(1719903892.072:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:52.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:52.073862 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:52.199460 kubelet[2516]: E0702 07:04:52.199410 2516 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:04:52.202365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:04:52.206575 kernel: audit: type=1131 audit(1719903892.201:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 2 07:04:52.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 2 07:04:52.202596 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:04:53.255379 containerd[1899]: time="2024-07-02T07:04:53.255326257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:53.255980 containerd[1899]: time="2024-07-02T07:04:53.255932051Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jul 2 07:04:53.257527 containerd[1899]: time="2024-07-02T07:04:53.257485449Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:53.258759 containerd[1899]: time="2024-07-02T07:04:53.258725254Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:53.261277 containerd[1899]: time="2024-07-02T07:04:53.261244777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:53.262463 containerd[1899]: time="2024-07-02T07:04:53.262423752Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.604260274s"
Jul 2 07:04:53.262616 containerd[1899]: time="2024-07-02T07:04:53.262591858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jul 2 07:04:53.291289 containerd[1899]: time="2024-07-02T07:04:53.291242729Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Jul 2 07:04:54.101393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272775088.mount: Deactivated successfully.
Jul 2 07:04:54.844078 containerd[1899]: time="2024-07-02T07:04:54.844022640Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:54.845580 containerd[1899]: time="2024-07-02T07:04:54.845511906Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Jul 2 07:04:54.847285 containerd[1899]: time="2024-07-02T07:04:54.847251514Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:54.849536 containerd[1899]: time="2024-07-02T07:04:54.849503996Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:54.851905 containerd[1899]: time="2024-07-02T07:04:54.851861840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:04:54.852845 containerd[1899]: time="2024-07-02T07:04:54.852809631Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.561512311s"
Jul 2 07:04:54.852973 containerd[1899]: time="2024-07-02T07:04:54.852949639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Jul 2 07:04:58.050304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jul 2 07:04:58.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.055573 kernel: audit: type=1131 audit(1719903898.049:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.834464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:58.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.840077 kernel: audit: type=1130 audit(1719903898.833:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.840183 kernel: audit: type=1131 audit(1719903898.833:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:58.846428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 07:04:58.877464 systemd[1]: Reloading.
Jul 2 07:04:59.179499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:04:59.355434 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 2 07:04:59.355574 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 2 07:04:59.356261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:59.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 2 07:04:59.364603 kernel: audit: type=1130 audit(1719903899.354:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jul 2 07:04:59.367483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 2 07:04:59.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:59.664973 kernel: audit: type=1130 audit(1719903899.659:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:04:59.660811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 2 07:04:59.753049 kubelet[2693]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:04:59.753615 kubelet[2693]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 2 07:04:59.753719 kubelet[2693]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 2 07:04:59.756307 kubelet[2693]: I0702 07:04:59.756011 2693 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 2 07:05:00.254410 kubelet[2693]: I0702 07:05:00.253919 2693 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Jul 2 07:05:00.254410 kubelet[2693]: I0702 07:05:00.254415 2693 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 2 07:05:00.255260 kubelet[2693]: I0702 07:05:00.255235 2693 server.go:895] "Client rotation is on, will bootstrap in background"
Jul 2 07:05:00.286314 kubelet[2693]: I0702 07:05:00.286273 2693 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 2 07:05:00.288217 kubelet[2693]: E0702 07:05:00.288181 2693 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.309612 kubelet[2693]: I0702 07:05:00.309568 2693 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 2 07:05:00.310067 kubelet[2693]: I0702 07:05:00.310040 2693 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 2 07:05:00.310337 kubelet[2693]: I0702 07:05:00.310311 2693 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jul 2 07:05:00.311202 kubelet[2693]: I0702 07:05:00.311170 2693 topology_manager.go:138] "Creating topology manager with none policy"
Jul 2 07:05:00.311436 kubelet[2693]: I0702 07:05:00.311233 2693 container_manager_linux.go:301] "Creating device plugin manager"
Jul 2 07:05:00.313528 kubelet[2693]: I0702 07:05:00.313498 2693 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:05:00.315620 kubelet[2693]: I0702 07:05:00.314958 2693 kubelet.go:393] "Attempting to sync node with API server"
Jul 2 07:05:00.315971 kubelet[2693]: I0702 07:05:00.315639 2693 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 2 07:05:00.315971 kubelet[2693]: I0702 07:05:00.315717 2693 kubelet.go:309] "Adding apiserver pod source"
Jul 2 07:05:00.315971 kubelet[2693]: I0702 07:05:00.315962 2693 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 2 07:05:00.319501 kubelet[2693]: W0702 07:05:00.318400 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.25.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-147&limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.319501 kubelet[2693]: E0702 07:05:00.318470 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-147&limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.321421 kubelet[2693]: I0702 07:05:00.321227 2693 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1"
Jul 2 07:05:00.326100 kubelet[2693]: W0702 07:05:00.326072 2693 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 2 07:05:00.327092 kubelet[2693]: I0702 07:05:00.327070 2693 server.go:1232] "Started kubelet"
Jul 2 07:05:00.327353 kubelet[2693]: W0702 07:05:00.327333 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.25.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.327458 kubelet[2693]: E0702 07:05:00.327448 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.330537 kubelet[2693]: I0702 07:05:00.329982 2693 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Jul 2 07:05:00.331046 kubelet[2693]: I0702 07:05:00.331030 2693 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Jul 2 07:05:00.331807 kubelet[2693]: I0702 07:05:00.331768 2693 server.go:462] "Adding debug handlers to kubelet server"
Jul 2 07:05:00.332044 kubelet[2693]: I0702 07:05:00.332030 2693 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 2 07:05:00.332535 kubelet[2693]: E0702 07:05:00.332431 2693 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-25-147.17de537c22cbaf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-25-147", UID:"ip-172-31-25-147", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-25-147"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 5, 0, 326932324, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 5, 0, 326932324, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-25-147"}': 'Post "https://172.31.25.147:6443/api/v1/namespaces/default/events": dial tcp 172.31.25.147:6443: connect: connection refused'(may retry after sleeping)
Jul 2 07:05:00.335441 kubelet[2693]: I0702 07:05:00.335424 2693 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 2 07:05:00.335793 kubelet[2693]: E0702 07:05:00.335774 2693 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jul 2 07:05:00.335871 kubelet[2693]: E0702 07:05:00.335807 2693 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 2 07:05:00.349592 kernel: audit: type=1325 audit(1719903900.339:201): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.349789 kernel: audit: type=1300 audit(1719903900.339:201): arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd92e6e370 a2=0 a3=7f55f44a6e90 items=0 ppid=2693 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.349831 kernel: audit: type=1327 audit(1719903900.339:201): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 07:05:00.339000 audit[2703]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2703 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.339000 audit[2703]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd92e6e370 a2=0 a3=7f55f44a6e90 items=0 ppid=2693 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 07:05:00.350321 kubelet[2693]: I0702 07:05:00.346468 2693 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jul 2 07:05:00.339000 audit[2704]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.353834 kernel: audit: type=1325 audit(1719903900.339:202): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.353955 kernel: audit: type=1300 audit(1719903900.339:202): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0137dde0 a2=0 a3=7fd181aabe90 items=0 ppid=2693 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.339000 audit[2704]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd0137dde0 a2=0 a3=7fd181aabe90 items=0 ppid=2693 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.354222 kubelet[2693]: I0702 07:05:00.353780 2693 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Jul 2 07:05:00.354523 kubelet[2693]: I0702 07:05:00.354480 2693 reconciler_new.go:29] "Reconciler: start to sync state"
Jul 2 07:05:00.358119 kubelet[2693]: E0702 07:05:00.357969 2693 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": dial tcp 172.31.25.147:6443: connect: connection refused" interval="200ms"
Jul 2 07:05:00.359167 kubelet[2693]: W0702 07:05:00.359118 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.359306 kubelet[2693]: E0702 07:05:00.359295 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.339000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jul 2 07:05:00.343000 audit[2706]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.343000 audit[2706]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc6cc612d0 a2=0 a3=7f560cff5e90 items=0 ppid=2693 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 2 07:05:00.360000 audit[2708]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.360000 audit[2708]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff001d6b90 a2=0 a3=7f2cf8e08e90 items=0 ppid=2693 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jul 2 07:05:00.401000 audit[2714]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2714 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.401000 audit[2714]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffde26d7b50 a2=0 a3=7f8c7a730e90 items=0 ppid=2693 pid=2714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.401000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jul 2 07:05:00.404193 kubelet[2693]: I0702 07:05:00.404170 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 2 07:05:00.404000 audit[2717]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.404000 audit[2717]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd17c9b750 a2=0 a3=7fdad9e0ae90 items=0 ppid=2693 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 2 07:05:00.406000 audit[2716]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2716 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 07:05:00.406000 audit[2716]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff282d7d00 a2=0 a3=7f7ec613ce90 items=0 ppid=2693 pid=2716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jul 2 07:05:00.410921 kubelet[2693]: I0702 07:05:00.410889 2693 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 2 07:05:00.410921 kubelet[2693]: I0702 07:05:00.410927 2693 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 2 07:05:00.411105 kubelet[2693]: I0702 07:05:00.410979 2693 kubelet.go:2303] "Starting kubelet main sync loop"
Jul 2 07:05:00.411105 kubelet[2693]: E0702 07:05:00.411053 2693 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 2 07:05:00.412953 kubelet[2693]: W0702 07:05:00.412924 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.25.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.413000 audit[2719]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2719 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 07:05:00.413000 audit[2719]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe03121280 a2=0 a3=7f86f3621e90 items=0 ppid=2693 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.413000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jul 2 07:05:00.415411 kubelet[2693]: E0702 07:05:00.415373 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused
Jul 2 07:05:00.415000 audit[2720]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2720 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.415000 audit[2720]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd16fbf230 a2=0 a3=7f4454732e90 items=0 ppid=2693 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 2 07:05:00.417000 audit[2721]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=2721 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 07:05:00.417000 audit[2721]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffed0eeb0f0 a2=0 a3=7f3c4a1f7e90 items=0 ppid=2693 pid=2721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jul 2 07:05:00.421000 audit[2722]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=2722 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jul 2 07:05:00.421000 audit[2722]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd24b882d0 a2=0 a3=7effe1ebce90 items=0 ppid=2693 pid=2722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.421000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 2 07:05:00.422000 audit[2723]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2723 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jul 2 07:05:00.422000 audit[2723]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe18cdc6a0 a2=0 a3=7f74dc6d2e90 items=0 ppid=2693 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:00.422000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jul 2 07:05:00.453530 kubelet[2693]: I0702 07:05:00.453496 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147"
Jul 2 07:05:00.454831 kubelet[2693]: E0702 07:05:00.454418 2693 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.25.147:6443/api/v1/nodes\": dial tcp 172.31.25.147:6443: connect: connection refused" node="ip-172-31-25-147"
Jul 2 07:05:00.463741 kubelet[2693]: I0702 07:05:00.463713 2693 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 2 07:05:00.463741 kubelet[2693]: I0702 07:05:00.463734 2693 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 2 07:05:00.463741 kubelet[2693]: I0702 07:05:00.463752 2693 state_mem.go:36] "Initialized new in-memory state store"
Jul 2 07:05:00.466761 kubelet[2693]: I0702 07:05:00.466733 2693 policy_none.go:49] "None policy: Start"
Jul 2 07:05:00.467594 kubelet[2693]: I0702 07:05:00.467548 2693 memory_manager.go:169] "Starting memorymanager" policy="None"
Jul 2 07:05:00.467683 kubelet[2693]: I0702 07:05:00.467604 2693 state_mem.go:35] "Initializing new in-memory state store"
Jul 2 07:05:00.476893 kubelet[2693]: I0702 07:05:00.476858 2693 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 2 07:05:00.477179 kubelet[2693]: I0702 07:05:00.477156 2693 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 2 07:05:00.482448 kubelet[2693]: E0702 07:05:00.482419 2693 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-25-147\" not found"
Jul 2 07:05:00.512540 kubelet[2693]: I0702 07:05:00.512055 2693 topology_manager.go:215] "Topology Admit Handler" podUID="fc207e46163833647bff95829a4c0603" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-147"
Jul 2 07:05:00.516448 kubelet[2693]: I0702 07:05:00.516416 2693 topology_manager.go:215] "Topology Admit Handler" podUID="5671cc26cad79dadff608a724449f256" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-147"
Jul 2 07:05:00.520719 kubelet[2693]: I0702 07:05:00.520690 2693 topology_manager.go:215] "Topology Admit Handler" podUID="1dbccc02d961f0c1241d0436bc584d57" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-147"
Jul 2 07:05:00.555248 kubelet[2693]: I0702 07:05:00.555213 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147"
Jul 2 07:05:00.555499 kubelet[2693]: I0702 07:05:00.555293 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147"
Jul 2 07:05:00.555499 kubelet[2693]: I0702 07:05:00.555402 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147"
Jul 2 07:05:00.555499 kubelet[2693]: I0702 07:05:00.555451 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc207e46163833647bff95829a4c0603-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-147\" (UID: \"fc207e46163833647bff95829a4c0603\") " pod="kube-system/kube-scheduler-ip-172-31-25-147"
Jul 2 07:05:00.555499 kubelet[2693]: I0702 07:05:00.555484 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-ca-certs\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147"
Jul 2 07:05:00.555803 kubelet[2693]: I0702 07:05:00.555512 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147"
Jul 2 07:05:00.555803 kubelet[2693]: I0702 07:05:00.555714 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147"
Jul 2 07:05:00.555803 kubelet[2693]: I0702 07:05:00.555761 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\"
(UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:00.556160 kubelet[2693]: I0702 07:05:00.555810 2693 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:00.560221 kubelet[2693]: E0702 07:05:00.560191 2693 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": dial tcp 172.31.25.147:6443: connect: connection refused" interval="400ms" Jul 2 07:05:00.656884 kubelet[2693]: I0702 07:05:00.656850 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:00.657502 kubelet[2693]: E0702 07:05:00.657477 2693 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.25.147:6443/api/v1/nodes\": dial tcp 172.31.25.147:6443: connect: connection refused" node="ip-172-31-25-147" Jul 2 07:05:00.822499 containerd[1899]: time="2024-07-02T07:05:00.822127803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-147,Uid:fc207e46163833647bff95829a4c0603,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:00.835769 containerd[1899]: time="2024-07-02T07:05:00.835725044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-147,Uid:1dbccc02d961f0c1241d0436bc584d57,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:00.836207 containerd[1899]: 
time="2024-07-02T07:05:00.835725081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-147,Uid:5671cc26cad79dadff608a724449f256,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:00.961948 kubelet[2693]: E0702 07:05:00.961913 2693 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": dial tcp 172.31.25.147:6443: connect: connection refused" interval="800ms" Jul 2 07:05:01.059926 kubelet[2693]: I0702 07:05:01.059897 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:01.060393 kubelet[2693]: E0702 07:05:01.060369 2693 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.25.147:6443/api/v1/nodes\": dial tcp 172.31.25.147:6443: connect: connection refused" node="ip-172-31-25-147" Jul 2 07:05:01.334693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1800652029.mount: Deactivated successfully. 
Jul 2 07:05:01.352351 containerd[1899]: time="2024-07-02T07:05:01.352297853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.356762 containerd[1899]: time="2024-07-02T07:05:01.356694291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 07:05:01.358276 containerd[1899]: time="2024-07-02T07:05:01.358230508Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.361258 containerd[1899]: time="2024-07-02T07:05:01.361190525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:05:01.363698 containerd[1899]: time="2024-07-02T07:05:01.363654631Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.365506 containerd[1899]: time="2024-07-02T07:05:01.365449163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:05:01.373939 kubelet[2693]: W0702 07:05:01.373869 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.25.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.373939 kubelet[2693]: E0702 07:05:01.373950 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.25.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 
07:05:01.378504 containerd[1899]: time="2024-07-02T07:05:01.378450091Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.382150 containerd[1899]: time="2024-07-02T07:05:01.382099063Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.385391 containerd[1899]: time="2024-07-02T07:05:01.384094329Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.399816 containerd[1899]: time="2024-07-02T07:05:01.399765336Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.403082 containerd[1899]: time="2024-07-02T07:05:01.403017789Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 580.768387ms" Jul 2 07:05:01.405801 containerd[1899]: time="2024-07-02T07:05:01.405668807Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 569.54159ms" Jul 2 07:05:01.407003 containerd[1899]: time="2024-07-02T07:05:01.406920358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 570.661308ms" Jul 2 07:05:01.407496 containerd[1899]: time="2024-07-02T07:05:01.407462991Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.408296 containerd[1899]: time="2024-07-02T07:05:01.408258384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.409361 containerd[1899]: time="2024-07-02T07:05:01.409220223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.410649 containerd[1899]: time="2024-07-02T07:05:01.410582276Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.412873 containerd[1899]: time="2024-07-02T07:05:01.412802188Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:01.466229 
kubelet[2693]: W0702 07:05:01.463340 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.25.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-147&limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.466229 kubelet[2693]: E0702 07:05:01.463519 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.25.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-25-147&limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.570590 kubelet[2693]: W0702 07:05:01.568046 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.25.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.570590 kubelet[2693]: E0702 07:05:01.568140 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.25.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.573907 kubelet[2693]: W0702 07:05:01.573710 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.573907 kubelet[2693]: E0702 07:05:01.573857 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:01.576627 kubelet[2693]: E0702 
07:05:01.576486 2693 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-25-147.17de537c22cbaf64", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-25-147", UID:"ip-172-31-25-147", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-25-147"}, FirstTimestamp:time.Date(2024, time.July, 2, 7, 5, 0, 326932324, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 7, 5, 0, 326932324, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-25-147"}': 'Post "https://172.31.25.147:6443/api/v1/namespaces/default/events": dial tcp 172.31.25.147:6443: connect: connection refused'(may retry after sleeping) Jul 2 07:05:01.766649 kubelet[2693]: E0702 07:05:01.764262 2693 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": dial tcp 172.31.25.147:6443: connect: connection refused" interval="1.6s" Jul 2 07:05:01.776511 containerd[1899]: time="2024-07-02T07:05:01.776125951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:01.776511 containerd[1899]: time="2024-07-02T07:05:01.776202791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.776511 containerd[1899]: time="2024-07-02T07:05:01.776230660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:01.776511 containerd[1899]: time="2024-07-02T07:05:01.776255235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.777057 containerd[1899]: time="2024-07-02T07:05:01.776691402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:01.777057 containerd[1899]: time="2024-07-02T07:05:01.776752425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.777057 containerd[1899]: time="2024-07-02T07:05:01.776773673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:01.777057 containerd[1899]: time="2024-07-02T07:05:01.776787243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.786160 containerd[1899]: time="2024-07-02T07:05:01.785807526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:01.786160 containerd[1899]: time="2024-07-02T07:05:01.785887592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.786160 containerd[1899]: time="2024-07-02T07:05:01.785913309Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:01.786160 containerd[1899]: time="2024-07-02T07:05:01.785931386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:01.912742 kubelet[2693]: I0702 07:05:01.912705 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:01.915249 kubelet[2693]: E0702 07:05:01.913604 2693 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.25.147:6443/api/v1/nodes\": dial tcp 172.31.25.147:6443: connect: connection refused" node="ip-172-31-25-147" Jul 2 07:05:02.134456 containerd[1899]: time="2024-07-02T07:05:02.133651249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-25-147,Uid:1dbccc02d961f0c1241d0436bc584d57,Namespace:kube-system,Attempt:0,} returns sandbox id \"82fcf12a0ffc9d6b8ccd462c205220d1aa544d89ba4dfe6cdce34a89df7b4d99\"" Jul 2 07:05:02.154949 containerd[1899]: time="2024-07-02T07:05:02.154903388Z" level=info msg="CreateContainer within sandbox \"82fcf12a0ffc9d6b8ccd462c205220d1aa544d89ba4dfe6cdce34a89df7b4d99\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:05:02.238095 containerd[1899]: time="2024-07-02T07:05:02.238051209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-25-147,Uid:5671cc26cad79dadff608a724449f256,Namespace:kube-system,Attempt:0,} returns sandbox id \"816cc2837f35d98c1770fd88e59910b82bb7dce6715c034f2732dd58c7a4e4e7\"" Jul 2 07:05:02.247784 containerd[1899]: time="2024-07-02T07:05:02.247730581Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ip-172-31-25-147,Uid:fc207e46163833647bff95829a4c0603,Namespace:kube-system,Attempt:0,} returns sandbox id \"6070298d9ab1ba4985f3d4ec5d46cbe1f54e885c611a57f241922b7a1f45adb9\"" Jul 2 07:05:02.266019 containerd[1899]: time="2024-07-02T07:05:02.265959447Z" level=info msg="CreateContainer within sandbox \"816cc2837f35d98c1770fd88e59910b82bb7dce6715c034f2732dd58c7a4e4e7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:05:02.266328 containerd[1899]: time="2024-07-02T07:05:02.265957897Z" level=info msg="CreateContainer within sandbox \"6070298d9ab1ba4985f3d4ec5d46cbe1f54e885c611a57f241922b7a1f45adb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:05:02.279843 containerd[1899]: time="2024-07-02T07:05:02.279779355Z" level=info msg="CreateContainer within sandbox \"82fcf12a0ffc9d6b8ccd462c205220d1aa544d89ba4dfe6cdce34a89df7b4d99\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6\"" Jul 2 07:05:02.283201 containerd[1899]: time="2024-07-02T07:05:02.283151222Z" level=info msg="StartContainer for \"24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6\"" Jul 2 07:05:02.330580 kubelet[2693]: E0702 07:05:02.330451 2693 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.25.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:02.332461 containerd[1899]: time="2024-07-02T07:05:02.332402988Z" level=info msg="CreateContainer within sandbox \"6070298d9ab1ba4985f3d4ec5d46cbe1f54e885c611a57f241922b7a1f45adb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a\"" Jul 2 07:05:02.333362 containerd[1899]: time="2024-07-02T07:05:02.333316124Z" level=info msg="StartContainer for \"0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a\"" Jul 2 07:05:02.344463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2375351923.mount: Deactivated successfully. Jul 2 07:05:02.356983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271029464.mount: Deactivated successfully. Jul 2 07:05:02.368033 containerd[1899]: time="2024-07-02T07:05:02.367931968Z" level=info msg="CreateContainer within sandbox \"816cc2837f35d98c1770fd88e59910b82bb7dce6715c034f2732dd58c7a4e4e7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"27b5c00676ae7e4b3adb044f803963c6d021c6c4107bb3debf614a05b4aaee5a\"" Jul 2 07:05:02.370276 containerd[1899]: time="2024-07-02T07:05:02.370224812Z" level=info msg="StartContainer for \"27b5c00676ae7e4b3adb044f803963c6d021c6c4107bb3debf614a05b4aaee5a\"" Jul 2 07:05:02.521477 containerd[1899]: time="2024-07-02T07:05:02.520540805Z" level=info msg="StartContainer for \"24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6\" returns successfully" Jul 2 07:05:02.545491 containerd[1899]: time="2024-07-02T07:05:02.545445896Z" level=info msg="StartContainer for \"0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a\" returns successfully" Jul 2 07:05:02.607873 containerd[1899]: time="2024-07-02T07:05:02.607816447Z" level=info msg="StartContainer for \"27b5c00676ae7e4b3adb044f803963c6d021c6c4107bb3debf614a05b4aaee5a\" returns successfully" Jul 2 07:05:03.365035 kubelet[2693]: E0702 07:05:03.365006 2693 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": dial tcp 172.31.25.147:6443: connect: connection refused" interval="3.2s" Jul 2 07:05:03.373183 
kubelet[2693]: W0702 07:05:03.373122 2693 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:03.373376 kubelet[2693]: E0702 07:05:03.373366 2693 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.25.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.25.147:6443: connect: connection refused Jul 2 07:05:03.518305 kubelet[2693]: I0702 07:05:03.518280 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:03.519256 kubelet[2693]: E0702 07:05:03.519234 2693 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.25.147:6443/api/v1/nodes\": dial tcp 172.31.25.147:6443: connect: connection refused" node="ip-172-31-25-147" Jul 2 07:05:06.147627 kubelet[2693]: E0702 07:05:06.147591 2693 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-147" not found Jul 2 07:05:06.332797 kubelet[2693]: I0702 07:05:06.332753 2693 apiserver.go:52] "Watching apiserver" Jul 2 07:05:06.355207 kubelet[2693]: I0702 07:05:06.355160 2693 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:05:06.529206 kubelet[2693]: E0702 07:05:06.528065 2693 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-25-147" not found Jul 2 07:05:06.574544 kubelet[2693]: E0702 07:05:06.574510 2693 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-25-147\" not found" node="ip-172-31-25-147" Jul 2 
07:05:06.721695 kubelet[2693]: I0702 07:05:06.721671 2693 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:06.729230 kubelet[2693]: I0702 07:05:06.729194 2693 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-25-147" Jul 2 07:05:08.825750 systemd[1]: Reloading. Jul 2 07:05:09.173211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:05:09.331718 kubelet[2693]: I0702 07:05:09.331681 2693 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:05:09.331991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:05:09.349017 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:05:09.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:09.349528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:05:09.352332 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 2 07:05:09.352438 kernel: audit: type=1131 audit(1719903909.348:213): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:09.360190 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:05:09.620006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:05:09.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:05:09.624586 kernel: audit: type=1130 audit(1719903909.619:214): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:09.729288 kubelet[3060]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:09.729770 kubelet[3060]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:05:09.729844 kubelet[3060]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:09.730016 kubelet[3060]: I0702 07:05:09.729977 3060 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:05:09.742997 kubelet[3060]: I0702 07:05:09.742487 3060 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 07:05:09.742997 kubelet[3060]: I0702 07:05:09.742529 3060 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:05:09.742997 kubelet[3060]: I0702 07:05:09.742822 3060 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 07:05:09.753463 kubelet[3060]: I0702 07:05:09.753431 3060 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 2 07:05:09.758132 kubelet[3060]: I0702 07:05:09.758090 3060 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.775465 3060 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.776170 3060 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.776407 3060 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":nul
l} Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.776431 3060 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.776446 3060 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:05:09.777283 kubelet[3060]: I0702 07:05:09.776496 3060 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:09.777756 kubelet[3060]: I0702 07:05:09.776627 3060 kubelet.go:393] "Attempting to sync node with API server" Jul 2 07:05:09.777756 kubelet[3060]: I0702 07:05:09.776645 3060 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:05:09.777756 kubelet[3060]: I0702 07:05:09.776672 3060 kubelet.go:309] "Adding apiserver pod source" Jul 2 07:05:09.777756 kubelet[3060]: I0702 07:05:09.776690 3060 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:05:09.797067 kubelet[3060]: I0702 07:05:09.796623 3060 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 07:05:09.797296 kubelet[3060]: I0702 07:05:09.797278 3060 server.go:1232] "Started kubelet" Jul 2 07:05:09.802701 kubelet[3060]: I0702 07:05:09.802678 3060 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:05:09.804738 kubelet[3060]: I0702 07:05:09.804708 3060 server.go:462] "Adding debug handlers to kubelet server" Jul 2 07:05:09.806995 kubelet[3060]: I0702 07:05:09.806978 3060 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 07:05:09.810242 kubelet[3060]: I0702 07:05:09.807295 3060 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:05:09.810464 kubelet[3060]: E0702 07:05:09.810423 3060 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" 
mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 07:05:09.810464 kubelet[3060]: E0702 07:05:09.810454 3060 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:05:09.814886 kubelet[3060]: I0702 07:05:09.814684 3060 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:05:09.819260 kubelet[3060]: I0702 07:05:09.818950 3060 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:05:09.819260 kubelet[3060]: I0702 07:05:09.819084 3060 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:05:09.819260 kubelet[3060]: I0702 07:05:09.819235 3060 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:05:09.837017 kubelet[3060]: I0702 07:05:09.836747 3060 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:05:09.841945 kubelet[3060]: I0702 07:05:09.840775 3060 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:05:09.841945 kubelet[3060]: I0702 07:05:09.840797 3060 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:05:09.841945 kubelet[3060]: I0702 07:05:09.840816 3060 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 07:05:09.841945 kubelet[3060]: E0702 07:05:09.840873 3060 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:05:09.942580 kubelet[3060]: E0702 07:05:09.941363 3060 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957425 3060 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957449 3060 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957469 3060 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957754 3060 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957789 3060 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:05:09.958010 kubelet[3060]: I0702 07:05:09.957799 3060 policy_none.go:49] "None policy: Start" Jul 2 07:05:09.958965 kubelet[3060]: I0702 07:05:09.958945 3060 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 07:05:09.959087 kubelet[3060]: I0702 07:05:09.958976 3060 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:05:09.959711 kubelet[3060]: I0702 07:05:09.959693 3060 state_mem.go:75] "Updated machine memory state" Jul 2 07:05:09.961606 kubelet[3060]: I0702 07:05:09.961589 3060 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:05:09.965612 kubelet[3060]: I0702 07:05:09.965597 3060 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:05:10.088192 kubelet[3060]: I0702 07:05:10.088165 3060 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-25-147" Jul 2 07:05:10.102767 kubelet[3060]: I0702 07:05:10.102729 3060 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-25-147" Jul 2 07:05:10.102922 kubelet[3060]: I0702 07:05:10.102830 3060 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-25-147" Jul 2 07:05:10.142317 kubelet[3060]: I0702 07:05:10.141845 3060 topology_manager.go:215] "Topology Admit Handler" podUID="5671cc26cad79dadff608a724449f256" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.142317 kubelet[3060]: I0702 07:05:10.142052 3060 topology_manager.go:215] "Topology Admit Handler" podUID="1dbccc02d961f0c1241d0436bc584d57" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.142317 kubelet[3060]: I0702 07:05:10.142222 3060 topology_manager.go:215] "Topology Admit Handler" podUID="fc207e46163833647bff95829a4c0603" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-25-147" Jul 2 07:05:10.149653 kubelet[3060]: E0702 07:05:10.149623 3060 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-25-147\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.222870 kubelet[3060]: I0702 07:05:10.221660 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-kubeconfig\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.222870 kubelet[3060]: I0702 07:05:10.221722 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.222870 kubelet[3060]: I0702 07:05:10.221757 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-k8s-certs\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.222870 kubelet[3060]: I0702 07:05:10.221796 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.222870 kubelet[3060]: I0702 07:05:10.221833 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-ca-certs\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.223217 kubelet[3060]: I0702 07:05:10.221865 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.223217 kubelet[3060]: I0702 07:05:10.222061 
3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5671cc26cad79dadff608a724449f256-ca-certs\") pod \"kube-apiserver-ip-172-31-25-147\" (UID: \"5671cc26cad79dadff608a724449f256\") " pod="kube-system/kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.223217 kubelet[3060]: I0702 07:05:10.222370 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1dbccc02d961f0c1241d0436bc584d57-k8s-certs\") pod \"kube-controller-manager-ip-172-31-25-147\" (UID: \"1dbccc02d961f0c1241d0436bc584d57\") " pod="kube-system/kube-controller-manager-ip-172-31-25-147" Jul 2 07:05:10.223217 kubelet[3060]: I0702 07:05:10.222410 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fc207e46163833647bff95829a4c0603-kubeconfig\") pod \"kube-scheduler-ip-172-31-25-147\" (UID: \"fc207e46163833647bff95829a4c0603\") " pod="kube-system/kube-scheduler-ip-172-31-25-147" Jul 2 07:05:10.781664 kubelet[3060]: I0702 07:05:10.781524 3060 apiserver.go:52] "Watching apiserver" Jul 2 07:05:10.820293 kubelet[3060]: I0702 07:05:10.820251 3060 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:05:10.898813 kubelet[3060]: E0702 07:05:10.898780 3060 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-25-147\" already exists" pod="kube-system/kube-apiserver-ip-172-31-25-147" Jul 2 07:05:10.924492 kubelet[3060]: I0702 07:05:10.924453 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-25-147" podStartSLOduration=1.922981793 podCreationTimestamp="2024-07-02 07:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 07:05:10.920357557 +0000 UTC m=+1.285431136" watchObservedRunningTime="2024-07-02 07:05:10.922981793 +0000 UTC m=+1.288055367" Jul 2 07:05:10.952658 kubelet[3060]: I0702 07:05:10.952499 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-25-147" podStartSLOduration=0.952450155 podCreationTimestamp="2024-07-02 07:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:10.939481131 +0000 UTC m=+1.304554712" watchObservedRunningTime="2024-07-02 07:05:10.952450155 +0000 UTC m=+1.317523735" Jul 2 07:05:10.974173 kubelet[3060]: I0702 07:05:10.974132 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-25-147" podStartSLOduration=0.974084418 podCreationTimestamp="2024-07-02 07:05:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:10.953921111 +0000 UTC m=+1.318994692" watchObservedRunningTime="2024-07-02 07:05:10.974084418 +0000 UTC m=+1.339157999" Jul 2 07:05:11.953788 update_engine[1878]: I0702 07:05:11.952140 1878 update_attempter.cc:509] Updating boot flags... Jul 2 07:05:12.143582 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3114) Jul 2 07:05:15.703402 sudo[2214]: pam_unix(sudo:session): session closed for user root Jul 2 07:05:15.701000 audit[2214]: USER_END pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 07:05:15.703000 audit[2214]: CRED_DISP pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:05:15.708751 kernel: audit: type=1106 audit(1719903915.701:215): pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:05:15.708860 kernel: audit: type=1104 audit(1719903915.703:216): pid=2214 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 07:05:15.730321 sshd[2210]: pam_unix(sshd:session): session closed for user core Jul 2 07:05:15.730000 audit[2210]: USER_END pid=2210 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:15.731000 audit[2210]: CRED_DISP pid=2210 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:15.735107 systemd[1]: sshd@6-172.31.25.147:22-139.178.89.65:52294.service: Deactivated successfully. Jul 2 07:05:15.736180 systemd[1]: session-7.scope: Deactivated successfully. 
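The `USER_END`/`CRED_DISP` audit records above are flat `key=value` strings with a nested `msg='…'` payload. A minimal sketch for pulling such a record apart, assuming a simplified regex-based split; the helper name `parse_audit_fields` and the trimmed sample record are illustrative, not part of any audit tooling:

```python
import re

def parse_audit_fields(record: str) -> dict:
    """Split an audit record body into key=value fields.
    Quoted values are unwrapped; a msg='...' payload comes back as
    one string so it can be fed through the same parser again."""
    fields = {}
    # quoted alternatives come first so msg='...' is kept whole
    for key, value in re.findall(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)", record):
        fields[key] = value.strip("'\"")
    return fields

# trimmed from the USER_END record above (illustrative sample)
rec = ("pid=2214 uid=500 auid=500 ses=7 "
       "msg='op=PAM:session_close acct=\"root\" exe=\"/usr/bin/sudo\" res=success'")
outer = parse_audit_fields(rec)
inner = parse_audit_fields(outer["msg"])  # second pass over the nested payload
```

The two-pass approach mirrors how the record is actually structured: kernel-level fields outside the quotes, the PAM detail inside `msg=`.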
Jul 2 07:05:15.739787 kernel: audit: type=1106 audit(1719903915.730:217): pid=2210 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:15.739868 kernel: audit: type=1104 audit(1719903915.731:218): pid=2210 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:15.739900 kernel: audit: type=1131 audit(1719903915.733:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.25.147:22-139.178.89.65:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:15.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.25.147:22-139.178.89.65:52294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:15.738976 systemd-logind[1877]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:05:15.741451 systemd-logind[1877]: Removed session 7. Jul 2 07:05:20.849284 kubelet[3060]: I0702 07:05:20.849254 3060 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:05:20.852175 containerd[1899]: time="2024-07-02T07:05:20.852126977Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
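The kubelet line above pushes the node's pod CIDR (`192.168.0.0/24`) to the runtime over CRI. A quick sanity check of what a CIDR like that can hold, using only the standard `ipaddress` module; the `pod_capacity` helper is a hypothetical name, and subtracting the network and broadcast addresses is a plain-IPv4 simplification rather than how the kubelet itself budgets pods:

```python
import ipaddress

def pod_capacity(cidr: str) -> int:
    """Addresses usable for pods in a node's pod CIDR, naively
    excluding the network and broadcast addresses."""
    net = ipaddress.ip_network(cidr)
    return net.num_addresses - 2

# the CIDR handed to the runtime in the log above
node_cidr = ipaddress.ip_network("192.168.0.0/24")
```

Membership checks (`some_pod_ip in node_cidr`) are a cheap way to confirm a pod IP was allocated from the node's range.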
Jul 2 07:05:20.853803 kubelet[3060]: I0702 07:05:20.853626 3060 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:05:21.302403 kubelet[3060]: I0702 07:05:21.302367 3060 topology_manager.go:215] "Topology Admit Handler" podUID="f9c38b1e-6c1d-4964-861b-79bfedf4c2b4" podNamespace="kube-system" podName="kube-proxy-czggd" Jul 2 07:05:21.318639 kubelet[3060]: I0702 07:05:21.318604 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9c38b1e-6c1d-4964-861b-79bfedf4c2b4-kube-proxy\") pod \"kube-proxy-czggd\" (UID: \"f9c38b1e-6c1d-4964-861b-79bfedf4c2b4\") " pod="kube-system/kube-proxy-czggd" Jul 2 07:05:21.319050 kubelet[3060]: I0702 07:05:21.319027 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9c38b1e-6c1d-4964-861b-79bfedf4c2b4-xtables-lock\") pod \"kube-proxy-czggd\" (UID: \"f9c38b1e-6c1d-4964-861b-79bfedf4c2b4\") " pod="kube-system/kube-proxy-czggd" Jul 2 07:05:21.319213 kubelet[3060]: I0702 07:05:21.319201 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9c38b1e-6c1d-4964-861b-79bfedf4c2b4-lib-modules\") pod \"kube-proxy-czggd\" (UID: \"f9c38b1e-6c1d-4964-861b-79bfedf4c2b4\") " pod="kube-system/kube-proxy-czggd" Jul 2 07:05:21.319319 kubelet[3060]: I0702 07:05:21.319310 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf4v9\" (UniqueName: \"kubernetes.io/projected/f9c38b1e-6c1d-4964-861b-79bfedf4c2b4-kube-api-access-jf4v9\") pod \"kube-proxy-czggd\" (UID: \"f9c38b1e-6c1d-4964-861b-79bfedf4c2b4\") " pod="kube-system/kube-proxy-czggd" Jul 2 07:05:21.611886 containerd[1899]: time="2024-07-02T07:05:21.610995008Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-czggd,Uid:f9c38b1e-6c1d-4964-861b-79bfedf4c2b4,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:21.651585 containerd[1899]: time="2024-07-02T07:05:21.651240034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:21.651813 containerd[1899]: time="2024-07-02T07:05:21.651596609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:21.651813 containerd[1899]: time="2024-07-02T07:05:21.651633542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:21.651813 containerd[1899]: time="2024-07-02T07:05:21.651653549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:21.691733 systemd[1]: run-containerd-runc-k8s.io-5906c7c796e22bf4eca25bd6fa4d2a6179318af9e83a63ed4955a2276e1ea4c2-runc.IJrn9Y.mount: Deactivated successfully. 
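containerd lines like the plugin-loading messages above are `key="value"` (logfmt-style) records. A sketch that splits one back into fields; `parse_logfmt` is an illustrative helper, and `shlex` is assumed here to be an adequate stand-in for containerd's actual quoting rules (it does honor the `\"` escapes inside `msg`):

```python
import shlex

def parse_logfmt(line: str) -> dict:
    """Parse a containerd-style key="value" log line into a dict."""
    pairs = {}
    for token in shlex.split(line):  # shlex strips the quoting per token
        if "=" in token:
            key, _, value = token.partition("=")
            pairs[key] = value
    return pairs

# one of the plugin-loading lines above, as a raw string
sample = ('time="2024-07-02T07:05:21.651240034Z" level=info '
          'msg="loading plugin \\"io.containerd.internal.v1.shutdown\\"..." '
          'runtime=io.containerd.runc.v2 type=io.containerd.internal.v1')
entry = parse_logfmt(sample)
```

Filtering a stream of such entries on `entry["level"]` or `entry["runtime"]` is usually easier than grepping the raw lines.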
Jul 2 07:05:21.740365 containerd[1899]: time="2024-07-02T07:05:21.740282898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czggd,Uid:f9c38b1e-6c1d-4964-861b-79bfedf4c2b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5906c7c796e22bf4eca25bd6fa4d2a6179318af9e83a63ed4955a2276e1ea4c2\"" Jul 2 07:05:21.747257 containerd[1899]: time="2024-07-02T07:05:21.746694717Z" level=info msg="CreateContainer within sandbox \"5906c7c796e22bf4eca25bd6fa4d2a6179318af9e83a63ed4955a2276e1ea4c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:05:21.800105 containerd[1899]: time="2024-07-02T07:05:21.799976463Z" level=info msg="CreateContainer within sandbox \"5906c7c796e22bf4eca25bd6fa4d2a6179318af9e83a63ed4955a2276e1ea4c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"570d40be0aae723ccaceb83f953892a21cce30653b1a0a83ef09ff0676fb2ca5\"" Jul 2 07:05:21.801845 containerd[1899]: time="2024-07-02T07:05:21.801803002Z" level=info msg="StartContainer for \"570d40be0aae723ccaceb83f953892a21cce30653b1a0a83ef09ff0676fb2ca5\"" Jul 2 07:05:21.864475 kubelet[3060]: I0702 07:05:21.864013 3060 topology_manager.go:215] "Topology Admit Handler" podUID="7947e604-f16b-4057-b28b-323a5d4b643c" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-mhhzx" Jul 2 07:05:21.924265 kubelet[3060]: I0702 07:05:21.923145 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2wx\" (UniqueName: \"kubernetes.io/projected/7947e604-f16b-4057-b28b-323a5d4b643c-kube-api-access-rn2wx\") pod \"tigera-operator-76c4974c85-mhhzx\" (UID: \"7947e604-f16b-4057-b28b-323a5d4b643c\") " pod="tigera-operator/tigera-operator-76c4974c85-mhhzx" Jul 2 07:05:21.924265 kubelet[3060]: I0702 07:05:21.923229 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/7947e604-f16b-4057-b28b-323a5d4b643c-var-lib-calico\") pod \"tigera-operator-76c4974c85-mhhzx\" (UID: \"7947e604-f16b-4057-b28b-323a5d4b643c\") " pod="tigera-operator/tigera-operator-76c4974c85-mhhzx" Jul 2 07:05:21.943106 containerd[1899]: time="2024-07-02T07:05:21.943042808Z" level=info msg="StartContainer for \"570d40be0aae723ccaceb83f953892a21cce30653b1a0a83ef09ff0676fb2ca5\" returns successfully" Jul 2 07:05:22.169815 containerd[1899]: time="2024-07-02T07:05:22.169701315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mhhzx,Uid:7947e604-f16b-4057-b28b-323a5d4b643c,Namespace:tigera-operator,Attempt:0,}" Jul 2 07:05:22.199280 containerd[1899]: time="2024-07-02T07:05:22.199188381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:22.199651 containerd[1899]: time="2024-07-02T07:05:22.199249942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:22.199651 containerd[1899]: time="2024-07-02T07:05:22.199279888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:22.199651 containerd[1899]: time="2024-07-02T07:05:22.199354471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:22.283978 containerd[1899]: time="2024-07-02T07:05:22.283925389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-mhhzx,Uid:7947e604-f16b-4057-b28b-323a5d4b643c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e21fadad186d46978d41d2462aafca2a058cf1416645c4994a990b81ef7174e8\"" Jul 2 07:05:22.287882 containerd[1899]: time="2024-07-02T07:05:22.287849789Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 07:05:22.410245 kernel: audit: type=1325 audit(1719903922.404:220): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=3379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.410410 kernel: audit: type=1300 audit(1719903922.404:220): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe63327d70 a2=0 a3=7ffe63327d5c items=0 ppid=3296 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.404000 audit[3379]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=3379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.404000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe63327d70 a2=0 a3=7ffe63327d5c items=0 ppid=3296 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:05:22.414417 kernel: audit: type=1327 audit(1719903922.404:220): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 
07:05:22.414500 kernel: audit: type=1325 audit(1719903922.404:221): table=nat:39 family=10 entries=1 op=nft_register_chain pid=3380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.404000 audit[3380]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=3380 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.404000 audit[3380]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff2be7900 a2=0 a3=7ffff2be78ec items=0 ppid=3296 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.418582 kernel: audit: type=1300 audit(1719903922.404:221): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff2be7900 a2=0 a3=7ffff2be78ec items=0 ppid=3296 pid=3380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:05:22.421375 kernel: audit: type=1327 audit(1719903922.404:221): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:05:22.406000 audit[3381]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=3381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.427879 kernel: audit: type=1325 audit(1719903922.406:222): table=filter:40 family=10 entries=1 op=nft_register_chain pid=3381 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.427966 kernel: audit: type=1300 audit(1719903922.406:222): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0dc7de70 a2=0 a3=7ffc0dc7de5c items=0 ppid=3296 pid=3381 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.406000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0dc7de70 a2=0 a3=7ffc0dc7de5c items=0 ppid=3296 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.431129 kernel: audit: type=1327 audit(1719903922.406:222): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:05:22.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:05:22.406000 audit[3382]: NETFILTER_CFG table=mangle:41 family=2 entries=1 op=nft_register_chain pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.406000 audit[3382]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff43b9d150 a2=0 a3=7fff43b9d13c items=0 ppid=3296 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.406000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 07:05:22.410000 audit[3383]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3383 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.410000 audit[3383]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd20361c40 a2=0 a3=7ffd20361c2c items=0 ppid=3296 pid=3383 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.433588 kernel: audit: type=1325 audit(1719903922.406:223): table=mangle:41 family=2 entries=1 op=nft_register_chain pid=3382 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.410000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 07:05:22.412000 audit[3384]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3384 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.412000 audit[3384]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5d6d2c20 a2=0 a3=7ffd5d6d2c0c items=0 ppid=3296 pid=3384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.412000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 07:05:22.527000 audit[3385]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.527000 audit[3385]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc8e905a90 a2=0 a3=7ffc8e905a7c items=0 ppid=3296 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.527000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:05:22.533000 audit[3387]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3387 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.533000 audit[3387]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd7bdac5d0 a2=0 a3=7ffd7bdac5bc items=0 ppid=3296 pid=3387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.533000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 07:05:22.541000 audit[3390]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3390 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.541000 audit[3390]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffddab7b1d0 a2=0 a3=7ffddab7b1bc items=0 ppid=3296 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 07:05:22.543000 audit[3391]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3391 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.543000 audit[3391]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc47aca2a0 a2=0 a3=7ffc47aca28c items=0 ppid=3296 pid=3391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.543000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:05:22.547000 audit[3393]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.547000 audit[3393]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc6e73b030 a2=0 a3=7ffc6e73b01c items=0 ppid=3296 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.547000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:05:22.549000 audit[3394]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3394 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.549000 audit[3394]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5ed065e0 a2=0 a3=7fff5ed065cc items=0 ppid=3296 pid=3394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.549000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:05:22.553000 audit[3396]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3396 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.553000 audit[3396]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffca2066500 a2=0 a3=7ffca20664ec items=0 ppid=3296 pid=3396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.553000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:05:22.559000 audit[3399]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3399 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.559000 audit[3399]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc8ba206f0 a2=0 a3=7ffc8ba206dc items=0 ppid=3296 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.559000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 07:05:22.561000 audit[3400]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3400 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.561000 audit[3400]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcecbd7ba0 a2=0 a3=7ffcecbd7b8c items=0 ppid=3296 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.561000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:05:22.564000 audit[3402]: NETFILTER_CFG table=filter:53 family=2 entries=1 
op=nft_register_rule pid=3402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.564000 audit[3402]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff612b49f0 a2=0 a3=7fff612b49dc items=0 ppid=3296 pid=3402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:05:22.565000 audit[3403]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3403 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.565000 audit[3403]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc1ad42850 a2=0 a3=7ffc1ad4283c items=0 ppid=3296 pid=3403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:05:22.569000 audit[3405]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3405 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.569000 audit[3405]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefca4b820 a2=0 a3=7ffefca4b80c items=0 ppid=3296 pid=3405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.569000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:05:22.574000 audit[3408]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3408 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.574000 audit[3408]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffec87760c0 a2=0 a3=7ffec87760ac items=0 ppid=3296 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.574000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:05:22.582000 audit[3411]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3411 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.582000 audit[3411]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc97fd7490 a2=0 a3=7ffc97fd747c items=0 ppid=3296 pid=3411 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.582000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:05:22.583000 audit[3412]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.583000 audit[3412]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff87938270 a2=0 a3=7fff8793825c items=0 ppid=3296 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:05:22.588000 audit[3414]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.588000 audit[3414]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd9ca14930 a2=0 a3=7ffd9ca1491c items=0 ppid=3296 pid=3414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:05:22.593000 audit[3417]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.593000 audit[3417]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb93187e0 a2=0 a3=7fffb93187cc items=0 ppid=3296 pid=3417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.593000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:05:22.596000 audit[3418]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3418 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.596000 audit[3418]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda8cf9140 a2=0 a3=7ffda8cf912c items=0 ppid=3296 pid=3418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.596000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:05:22.600000 audit[3420]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3420 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 07:05:22.600000 audit[3420]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffdb050d140 a2=0 a3=7ffdb050d12c items=0 ppid=3296 pid=3420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 07:05:22.663000 audit[3426]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:22.663000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd9e03feb0 a2=0 a3=7ffd9e03fe9c items=0 
ppid=3296 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:22.673000 audit[3426]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:22.673000 audit[3426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd9e03feb0 a2=0 a3=7ffd9e03fe9c items=0 ppid=3296 pid=3426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:22.676000 audit[3432]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3432 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.676000 audit[3432]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcc0be81b0 a2=0 a3=7ffcc0be819c items=0 ppid=3296 pid=3432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.676000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 07:05:22.681000 audit[3434]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.681000 audit[3434]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=836 a0=3 a1=7ffccc738ea0 a2=0 a3=7ffccc738e8c items=0 ppid=3296 pid=3434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 07:05:22.687000 audit[3437]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3437 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.687000 audit[3437]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc73a6f5c0 a2=0 a3=7ffc73a6f5ac items=0 ppid=3296 pid=3437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 07:05:22.689000 audit[3438]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3438 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.689000 audit[3438]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc6e7324a0 a2=0 a3=7ffc6e73248c items=0 ppid=3296 pid=3438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.689000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 07:05:22.694000 audit[3440]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3440 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.694000 audit[3440]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd0e066cc0 a2=0 a3=7ffd0e066cac items=0 ppid=3296 pid=3440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.694000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 07:05:22.696000 audit[3441]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3441 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.696000 audit[3441]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc0311d5c0 a2=0 a3=7ffc0311d5ac items=0 ppid=3296 pid=3441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 07:05:22.700000 audit[3443]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3443 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.700000 audit[3443]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe85d41190 a2=0 a3=7ffe85d4117c items=0 ppid=3296 pid=3443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 07:05:22.707000 audit[3446]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.707000 audit[3446]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffecd34b9c0 a2=0 a3=7ffecd34b9ac items=0 ppid=3296 pid=3446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.707000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 07:05:22.709000 audit[3447]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.709000 audit[3447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff095bd990 a2=0 a3=7fff095bd97c items=0 ppid=3296 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 07:05:22.715000 audit[3449]: NETFILTER_CFG table=filter:74 
family=10 entries=1 op=nft_register_rule pid=3449 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.715000 audit[3449]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd86b76130 a2=0 a3=7ffd86b7611c items=0 ppid=3296 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 07:05:22.717000 audit[3450]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.717000 audit[3450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea0940c30 a2=0 a3=7ffea0940c1c items=0 ppid=3296 pid=3450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 07:05:22.723000 audit[3452]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.723000 audit[3452]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1b34ee90 a2=0 a3=7fff1b34ee7c items=0 ppid=3296 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.723000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 07:05:22.741000 audit[3455]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3455 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.741000 audit[3455]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc763a1d40 a2=0 a3=7ffc763a1d2c items=0 ppid=3296 pid=3455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 07:05:22.751000 audit[3458]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3458 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.751000 audit[3458]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc666e88b0 a2=0 a3=7ffc666e889c items=0 ppid=3296 pid=3458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 07:05:22.753000 audit[3459]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=3459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.753000 audit[3459]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc932b7600 a2=0 a3=7ffc932b75ec items=0 ppid=3296 pid=3459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.753000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 07:05:22.756000 audit[3461]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3461 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.756000 audit[3461]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd861916a0 a2=0 a3=7ffd8619168c items=0 ppid=3296 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:05:22.761000 audit[3464]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.761000 audit[3464]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffc844a31e0 a2=0 a3=7ffc844a31cc items=0 ppid=3296 pid=3464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.761000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 07:05:22.763000 audit[3465]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3465 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.763000 audit[3465]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea6b77450 a2=0 a3=7ffea6b7743c items=0 ppid=3296 pid=3465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 07:05:22.767000 audit[3467]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.767000 audit[3467]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffcd902a660 a2=0 a3=7ffcd902a64c items=0 ppid=3296 pid=3467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.767000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 07:05:22.769000 audit[3468]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.769000 audit[3468]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3af59770 a2=0 a3=7ffc3af5975c 
items=0 ppid=3296 pid=3468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 07:05:22.773000 audit[3470]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3470 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.773000 audit[3470]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff038fcdb0 a2=0 a3=7fff038fcd9c items=0 ppid=3296 pid=3470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.773000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:05:22.779000 audit[3473]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3473 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 07:05:22.779000 audit[3473]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff5cdd1f20 a2=0 a3=7fff5cdd1f0c items=0 ppid=3296 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 07:05:22.784000 audit[3475]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:05:22.784000 audit[3475]: SYSCALL arch=c000003e syscall=46 
success=yes exit=2004 a0=3 a1=7ffed6f71b80 a2=0 a3=7ffed6f71b6c items=0 ppid=3296 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.784000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:22.785000 audit[3475]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3475 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 07:05:22.785000 audit[3475]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffed6f71b80 a2=0 a3=7ffed6f71b6c items=0 ppid=3296 pid=3475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:22.785000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:22.932296 kubelet[3060]: I0702 07:05:22.929116 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-czggd" podStartSLOduration=1.927365231 podCreationTimestamp="2024-07-02 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:22.927216218 +0000 UTC m=+13.292289800" watchObservedRunningTime="2024-07-02 07:05:22.927365231 +0000 UTC m=+13.292438812" Jul 2 07:05:23.610493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3626160940.mount: Deactivated successfully. 
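[Editor's note — not part of the captured log.] The `proctitle=` values in the audit records above are the process command lines, hex-encoded by the kernel with NUL bytes separating argv elements. A minimal sketch of decoding one (the sample hex is copied from the `iptables-restore` PROCTITLE record above; the function name is our own):

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into the original argv list.

    The kernel encodes the command line as raw bytes in hex, with NUL
    (0x00) bytes separating the individual arguments.
    """
    raw = bytes.fromhex(hex_str)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00")]

# Hex value taken verbatim from one of the PROCTITLE records above.
sample = ("69707461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This confirms kube-proxy is invoking `iptables-restore -w 5 -W 100000 --noflush --counters` to install its chains in bulk. Note that some PROCTITLE values in the log are truncated at a fixed length by the kernel, so decoding those yields an incomplete final argument.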
Jul 2 07:05:24.554520 containerd[1899]: time="2024-07-02T07:05:24.554473418Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:24.556426 containerd[1899]: time="2024-07-02T07:05:24.556361850Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076084" Jul 2 07:05:24.558007 containerd[1899]: time="2024-07-02T07:05:24.557973493Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:24.561670 containerd[1899]: time="2024-07-02T07:05:24.561637848Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:24.563935 containerd[1899]: time="2024-07-02T07:05:24.563900680Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:24.565099 containerd[1899]: time="2024-07-02T07:05:24.565061429Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.27702302s" Jul 2 07:05:24.565237 containerd[1899]: time="2024-07-02T07:05:24.565214483Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 07:05:24.600881 containerd[1899]: time="2024-07-02T07:05:24.600833165Z" level=info msg="CreateContainer within sandbox 
\"e21fadad186d46978d41d2462aafca2a058cf1416645c4994a990b81ef7174e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 07:05:24.623442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119883911.mount: Deactivated successfully. Jul 2 07:05:24.630573 containerd[1899]: time="2024-07-02T07:05:24.630506843Z" level=info msg="CreateContainer within sandbox \"e21fadad186d46978d41d2462aafca2a058cf1416645c4994a990b81ef7174e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470\"" Jul 2 07:05:24.631901 containerd[1899]: time="2024-07-02T07:05:24.631864846Z" level=info msg="StartContainer for \"8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470\"" Jul 2 07:05:24.746943 containerd[1899]: time="2024-07-02T07:05:24.746891919Z" level=info msg="StartContainer for \"8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470\" returns successfully" Jul 2 07:05:25.616187 systemd[1]: run-containerd-runc-k8s.io-8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470-runc.oTvpJi.mount: Deactivated successfully. 
Jul 2 07:05:27.724010 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 2 07:05:27.724165 kernel: audit: type=1325 audit(1719903927.722:271): table=filter:89 family=2 entries=15 op=nft_register_rule pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.722000 audit[3523]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.722000 audit[3523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc3aec9bc0 a2=0 a3=7ffc3aec9bac items=0 ppid=3296 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.727968 kernel: audit: type=1300 audit(1719903927.722:271): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffc3aec9bc0 a2=0 a3=7ffc3aec9bac items=0 ppid=3296 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.728038 kernel: audit: type=1327 audit(1719903927.722:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.722000 audit[3523]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.737728 kernel: audit: type=1325 audit(1719903927.722:272): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.738282 kernel: audit: type=1300 audit(1719903927.722:272): arch=c000003e 
syscall=46 success=yes exit=2700 a0=3 a1=7ffc3aec9bc0 a2=0 a3=0 items=0 ppid=3296 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.722000 audit[3523]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc3aec9bc0 a2=0 a3=0 items=0 ppid=3296 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.722000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.740575 kernel: audit: type=1327 audit(1719903927.722:272): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.743000 audit[3525]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.746940 kernel: audit: type=1325 audit(1719903927.743:273): table=filter:91 family=2 entries=16 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.743000 audit[3525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe88fe5a70 a2=0 a3=7ffe88fe5a5c items=0 ppid=3296 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.743000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.751172 kernel: audit: type=1300 audit(1719903927.743:273): arch=c000003e syscall=46 success=yes exit=5908 a0=3 
a1=7ffe88fe5a70 a2=0 a3=7ffe88fe5a5c items=0 ppid=3296 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.751242 kernel: audit: type=1327 audit(1719903927.743:273): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.751323 kernel: audit: type=1325 audit(1719903927.744:274): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.744000 audit[3525]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:27.744000 audit[3525]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe88fe5a70 a2=0 a3=0 items=0 ppid=3296 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:27.744000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:27.881208 kubelet[3060]: I0702 07:05:27.881165 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-mhhzx" podStartSLOduration=4.5931707 podCreationTimestamp="2024-07-02 07:05:21 +0000 UTC" firstStartedPulling="2024-07-02 07:05:22.285180482 +0000 UTC m=+12.650254053" lastFinishedPulling="2024-07-02 07:05:24.571497058 +0000 UTC m=+14.936570627" observedRunningTime="2024-07-02 07:05:24.953204107 +0000 UTC m=+15.318277687" watchObservedRunningTime="2024-07-02 07:05:27.879487274 +0000 UTC m=+18.244560858" Jul 2 07:05:27.881826 kubelet[3060]: I0702 07:05:27.881347 3060 topology_manager.go:215] 
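The podStartSLOduration=4.5931707 in the tigera-operator tracker line above reconciles as the span from podCreationTimestamp to watchObservedRunningTime, minus the image-pull window (firstStartedPulling to lastFinishedPulling). This is an interpretation of the logged fields, not taken from kubelet source; a sketch with the timestamps copied from the record:

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # Timestamps as logged, e.g. "2024-07-02 07:05:27.879487274 +0000 UTC".
    # Python's datetime only carries microseconds, so truncate the
    # nanosecond fraction to six digits before parsing.
    date, time_part, *_ = ts.split()
    if "." in time_part:
        sec, frac = time_part.split(".")
        time_part = f"{sec}.{frac[:6]}"
    return datetime.fromisoformat(f"{date}T{time_part}+00:00")

created    = parse("2024-07-02 07:05:21 +0000 UTC")
pull_start = parse("2024-07-02 07:05:22.285180482 +0000 UTC")
pull_end   = parse("2024-07-02 07:05:24.571497058 +0000 UTC")
observed   = parse("2024-07-02 07:05:27.879487274 +0000 UTC")

slo = (observed - created) - (pull_end - pull_start)
print(slo.total_seconds())
# → 4.59317 (matches the logged 4.5931707 up to the dropped nanoseconds)
```

The earlier kube-proxy line fits the same formula with a zero-length pull window (both pull timestamps are the zero value 0001-01-01), giving 22.927365231 - 21 = 1.927365231 seconds.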
"Topology Admit Handler" podUID="f0bfbd12-31a0-40a9-ae02-2a2db6274c6a" podNamespace="calico-system" podName="calico-typha-79f9fd9777-55tv6" Jul 2 07:05:27.975498 kubelet[3060]: I0702 07:05:27.975386 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0bfbd12-31a0-40a9-ae02-2a2db6274c6a-tigera-ca-bundle\") pod \"calico-typha-79f9fd9777-55tv6\" (UID: \"f0bfbd12-31a0-40a9-ae02-2a2db6274c6a\") " pod="calico-system/calico-typha-79f9fd9777-55tv6" Jul 2 07:05:27.975795 kubelet[3060]: I0702 07:05:27.975781 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f0bfbd12-31a0-40a9-ae02-2a2db6274c6a-typha-certs\") pod \"calico-typha-79f9fd9777-55tv6\" (UID: \"f0bfbd12-31a0-40a9-ae02-2a2db6274c6a\") " pod="calico-system/calico-typha-79f9fd9777-55tv6" Jul 2 07:05:27.975911 kubelet[3060]: I0702 07:05:27.975897 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rpd\" (UniqueName: \"kubernetes.io/projected/f0bfbd12-31a0-40a9-ae02-2a2db6274c6a-kube-api-access-h9rpd\") pod \"calico-typha-79f9fd9777-55tv6\" (UID: \"f0bfbd12-31a0-40a9-ae02-2a2db6274c6a\") " pod="calico-system/calico-typha-79f9fd9777-55tv6" Jul 2 07:05:28.030466 kubelet[3060]: I0702 07:05:28.030431 3060 topology_manager.go:215] "Topology Admit Handler" podUID="456ba6dc-179b-4184-8e58-b4737ff9024e" podNamespace="calico-system" podName="calico-node-qmr4t" Jul 2 07:05:28.076998 kubelet[3060]: I0702 07:05:28.076972 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-cni-log-dir\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077281 
kubelet[3060]: I0702 07:05:28.077268 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-lib-modules\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077379 kubelet[3060]: I0702 07:05:28.077371 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-var-lib-calico\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077458 kubelet[3060]: I0702 07:05:28.077451 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-flexvol-driver-host\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077539 kubelet[3060]: I0702 07:05:28.077532 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-policysync\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077640 kubelet[3060]: I0702 07:05:28.077625 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-var-run-calico\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077782 kubelet[3060]: I0702 07:05:28.077769 3060 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-xtables-lock\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077877 kubelet[3060]: I0702 07:05:28.077868 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-cni-bin-dir\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.077997 kubelet[3060]: I0702 07:05:28.077988 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/456ba6dc-179b-4184-8e58-b4737ff9024e-cni-net-dir\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.078139 kubelet[3060]: I0702 07:05:28.078129 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/456ba6dc-179b-4184-8e58-b4737ff9024e-node-certs\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.078317 kubelet[3060]: I0702 07:05:28.078305 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtd9v\" (UniqueName: \"kubernetes.io/projected/456ba6dc-179b-4184-8e58-b4737ff9024e-kube-api-access-jtd9v\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.078430 kubelet[3060]: I0702 07:05:28.078422 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/456ba6dc-179b-4184-8e58-b4737ff9024e-tigera-ca-bundle\") pod \"calico-node-qmr4t\" (UID: \"456ba6dc-179b-4184-8e58-b4737ff9024e\") " pod="calico-system/calico-node-qmr4t" Jul 2 07:05:28.135222 kubelet[3060]: I0702 07:05:28.135190 3060 topology_manager.go:215] "Topology Admit Handler" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" podNamespace="calico-system" podName="csi-node-driver-fxlxj" Jul 2 07:05:28.135896 kubelet[3060]: E0702 07:05:28.135872 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:28.179070 kubelet[3060]: I0702 07:05:28.179030 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsvx\" (UniqueName: \"kubernetes.io/projected/599f2da3-47a6-4b46-afa0-843612b872b6-kube-api-access-6gsvx\") pod \"csi-node-driver-fxlxj\" (UID: \"599f2da3-47a6-4b46-afa0-843612b872b6\") " pod="calico-system/csi-node-driver-fxlxj" Jul 2 07:05:28.179247 kubelet[3060]: I0702 07:05:28.179232 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/599f2da3-47a6-4b46-afa0-843612b872b6-registration-dir\") pod \"csi-node-driver-fxlxj\" (UID: \"599f2da3-47a6-4b46-afa0-843612b872b6\") " pod="calico-system/csi-node-driver-fxlxj" Jul 2 07:05:28.179309 kubelet[3060]: I0702 07:05:28.179284 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/599f2da3-47a6-4b46-afa0-843612b872b6-varrun\") pod \"csi-node-driver-fxlxj\" (UID: \"599f2da3-47a6-4b46-afa0-843612b872b6\") " 
pod="calico-system/csi-node-driver-fxlxj" Jul 2 07:05:28.179357 kubelet[3060]: I0702 07:05:28.179337 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/599f2da3-47a6-4b46-afa0-843612b872b6-kubelet-dir\") pod \"csi-node-driver-fxlxj\" (UID: \"599f2da3-47a6-4b46-afa0-843612b872b6\") " pod="calico-system/csi-node-driver-fxlxj" Jul 2 07:05:28.179405 kubelet[3060]: I0702 07:05:28.179370 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/599f2da3-47a6-4b46-afa0-843612b872b6-socket-dir\") pod \"csi-node-driver-fxlxj\" (UID: \"599f2da3-47a6-4b46-afa0-843612b872b6\") " pod="calico-system/csi-node-driver-fxlxj" Jul 2 07:05:28.182416 kubelet[3060]: E0702 07:05:28.182393 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.182601 kubelet[3060]: W0702 07:05:28.182584 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.182716 kubelet[3060]: E0702 07:05:28.182706 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.184670 kubelet[3060]: E0702 07:05:28.184654 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.184796 kubelet[3060]: W0702 07:05:28.184782 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.184920 kubelet[3060]: E0702 07:05:28.184909 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.185270 kubelet[3060]: E0702 07:05:28.185255 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.185363 kubelet[3060]: W0702 07:05:28.185351 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.185446 kubelet[3060]: E0702 07:05:28.185437 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.185772 kubelet[3060]: E0702 07:05:28.185759 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.185880 kubelet[3060]: W0702 07:05:28.185867 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.185964 kubelet[3060]: E0702 07:05:28.185955 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.188646 kubelet[3060]: E0702 07:05:28.188631 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.188763 kubelet[3060]: W0702 07:05:28.188750 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.188847 kubelet[3060]: E0702 07:05:28.188838 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.189115 kubelet[3060]: E0702 07:05:28.189104 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.189198 kubelet[3060]: W0702 07:05:28.189188 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.189283 kubelet[3060]: E0702 07:05:28.189274 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.189540 kubelet[3060]: E0702 07:05:28.189528 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.189736 kubelet[3060]: W0702 07:05:28.189720 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.189839 kubelet[3060]: E0702 07:05:28.189829 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.190105 kubelet[3060]: E0702 07:05:28.190093 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.190190 kubelet[3060]: W0702 07:05:28.190179 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.190271 kubelet[3060]: E0702 07:05:28.190263 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.197332 containerd[1899]: time="2024-07-02T07:05:28.196865343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f9fd9777-55tv6,Uid:f0bfbd12-31a0-40a9-ae02-2a2db6274c6a,Namespace:calico-system,Attempt:0,}" Jul 2 07:05:28.197894 kubelet[3060]: E0702 07:05:28.197877 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.198072 kubelet[3060]: W0702 07:05:28.197993 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.198290 kubelet[3060]: E0702 07:05:28.198275 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.198546 kubelet[3060]: E0702 07:05:28.198534 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.198658 kubelet[3060]: W0702 07:05:28.198644 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.199594 kubelet[3060]: E0702 07:05:28.199580 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.199860 kubelet[3060]: E0702 07:05:28.199848 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.199946 kubelet[3060]: W0702 07:05:28.199934 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.200127 kubelet[3060]: E0702 07:05:28.200103 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.200493 kubelet[3060]: E0702 07:05:28.200480 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.200606 kubelet[3060]: W0702 07:05:28.200593 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.200775 kubelet[3060]: E0702 07:05:28.200765 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.206931 kubelet[3060]: E0702 07:05:28.206902 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.207069 kubelet[3060]: W0702 07:05:28.207056 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.207267 kubelet[3060]: E0702 07:05:28.207256 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.207948 kubelet[3060]: E0702 07:05:28.207933 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.208088 kubelet[3060]: W0702 07:05:28.208073 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.208204 kubelet[3060]: E0702 07:05:28.208192 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.208845 kubelet[3060]: E0702 07:05:28.208834 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.213450 kubelet[3060]: W0702 07:05:28.208916 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.213450 kubelet[3060]: E0702 07:05:28.208933 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.213450 kubelet[3060]: E0702 07:05:28.209192 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.213450 kubelet[3060]: W0702 07:05:28.209200 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.213450 kubelet[3060]: E0702 07:05:28.209214 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.213450 kubelet[3060]: E0702 07:05:28.209452 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.213450 kubelet[3060]: W0702 07:05:28.209460 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.213450 kubelet[3060]: E0702 07:05:28.209472 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.216131 kubelet[3060]: E0702 07:05:28.216112 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.216262 kubelet[3060]: W0702 07:05:28.216246 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.216368 kubelet[3060]: E0702 07:05:28.216357 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.281211 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.287195 kubelet[3060]: W0702 07:05:28.281252 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.281282 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.281724 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.287195 kubelet[3060]: W0702 07:05:28.281737 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.281763 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.282026 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.287195 kubelet[3060]: W0702 07:05:28.282036 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.282056 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.287195 kubelet[3060]: E0702 07:05:28.282370 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.287782 kubelet[3060]: W0702 07:05:28.282381 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.287782 kubelet[3060]: E0702 07:05:28.282404 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.288509 kubelet[3060]: E0702 07:05:28.288300 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.288509 kubelet[3060]: W0702 07:05:28.288322 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.288509 kubelet[3060]: E0702 07:05:28.288446 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.288899 kubelet[3060]: E0702 07:05:28.288888 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.288980 kubelet[3060]: W0702 07:05:28.288969 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.289189 kubelet[3060]: E0702 07:05:28.289179 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.289407 kubelet[3060]: E0702 07:05:28.289397 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.289508 kubelet[3060]: W0702 07:05:28.289474 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.289711 kubelet[3060]: E0702 07:05:28.289695 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.289835 kubelet[3060]: E0702 07:05:28.289820 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.289894 kubelet[3060]: W0702 07:05:28.289836 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.290805 kubelet[3060]: E0702 07:05:28.290791 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.290895 kubelet[3060]: E0702 07:05:28.290805 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.290961 kubelet[3060]: W0702 07:05:28.290952 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.291145 kubelet[3060]: E0702 07:05:28.291129 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.291838 kubelet[3060]: E0702 07:05:28.291825 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.291940 kubelet[3060]: W0702 07:05:28.291930 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.292108 kubelet[3060]: E0702 07:05:28.292098 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.292386 kubelet[3060]: E0702 07:05:28.292374 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.292479 kubelet[3060]: W0702 07:05:28.292466 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.292594 kubelet[3060]: E0702 07:05:28.292585 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.292906 kubelet[3060]: E0702 07:05:28.292895 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.292999 kubelet[3060]: W0702 07:05:28.292988 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.293155 kubelet[3060]: E0702 07:05:28.293146 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.293352 kubelet[3060]: E0702 07:05:28.293344 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.293421 kubelet[3060]: W0702 07:05:28.293412 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.293595 kubelet[3060]: E0702 07:05:28.293547 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.293818 kubelet[3060]: E0702 07:05:28.293809 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.293892 kubelet[3060]: W0702 07:05:28.293883 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.294050 kubelet[3060]: E0702 07:05:28.294042 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.294246 kubelet[3060]: E0702 07:05:28.294237 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.294315 kubelet[3060]: W0702 07:05:28.294306 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.294455 kubelet[3060]: E0702 07:05:28.294447 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.294689 kubelet[3060]: E0702 07:05:28.294655 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.297979 kubelet[3060]: W0702 07:05:28.297958 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.298198 kubelet[3060]: E0702 07:05:28.298188 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.298437 kubelet[3060]: E0702 07:05:28.298428 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.298511 kubelet[3060]: W0702 07:05:28.298502 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.298700 kubelet[3060]: E0702 07:05:28.298691 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.298929 kubelet[3060]: E0702 07:05:28.298921 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.299008 kubelet[3060]: W0702 07:05:28.298998 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.299168 kubelet[3060]: E0702 07:05:28.299159 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.299381 kubelet[3060]: E0702 07:05:28.299372 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.299448 kubelet[3060]: W0702 07:05:28.299439 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.299739 kubelet[3060]: E0702 07:05:28.299718 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.299954 kubelet[3060]: W0702 07:05:28.299943 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.300161 kubelet[3060]: E0702 07:05:28.299925 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.300337 kubelet[3060]: E0702 07:05:28.300321 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.300611 kubelet[3060]: E0702 07:05:28.300601 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.300714 kubelet[3060]: W0702 07:05:28.300682 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.300918 kubelet[3060]: E0702 07:05:28.300906 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.301162 kubelet[3060]: E0702 07:05:28.301151 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.301258 kubelet[3060]: W0702 07:05:28.301247 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.301429 kubelet[3060]: E0702 07:05:28.301420 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.301671 kubelet[3060]: E0702 07:05:28.301660 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.301891 kubelet[3060]: W0702 07:05:28.301871 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.302149 kubelet[3060]: E0702 07:05:28.302132 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.302419 kubelet[3060]: E0702 07:05:28.302408 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.302758 kubelet[3060]: W0702 07:05:28.302497 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.303003 kubelet[3060]: E0702 07:05:28.302991 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:28.303492 kubelet[3060]: E0702 07:05:28.303478 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.303729 kubelet[3060]: W0702 07:05:28.303713 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.303822 kubelet[3060]: E0702 07:05:28.303812 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.305647 containerd[1899]: time="2024-07-02T07:05:28.305389876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:28.306159 containerd[1899]: time="2024-07-02T07:05:28.305518504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:28.306679 containerd[1899]: time="2024-07-02T07:05:28.306339755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:28.307584 containerd[1899]: time="2024-07-02T07:05:28.306598871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:28.325350 kubelet[3060]: E0702 07:05:28.325266 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:28.325350 kubelet[3060]: W0702 07:05:28.325285 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:28.325350 kubelet[3060]: E0702 07:05:28.325310 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:28.339390 containerd[1899]: time="2024-07-02T07:05:28.339355896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmr4t,Uid:456ba6dc-179b-4184-8e58-b4737ff9024e,Namespace:calico-system,Attempt:0,}" Jul 2 07:05:28.384370 containerd[1899]: time="2024-07-02T07:05:28.384129192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:28.384580 containerd[1899]: time="2024-07-02T07:05:28.384434690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:28.384580 containerd[1899]: time="2024-07-02T07:05:28.384527856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:28.384713 containerd[1899]: time="2024-07-02T07:05:28.384609803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:28.446687 containerd[1899]: time="2024-07-02T07:05:28.446641992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79f9fd9777-55tv6,Uid:f0bfbd12-31a0-40a9-ae02-2a2db6274c6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"d3080a1f745ae8490968029df9ffe5e7316ea22d9965fe35cfca906c4db94e0f\"" Jul 2 07:05:28.466617 containerd[1899]: time="2024-07-02T07:05:28.466576244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 2 07:05:28.480142 containerd[1899]: time="2024-07-02T07:05:28.478998896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qmr4t,Uid:456ba6dc-179b-4184-8e58-b4737ff9024e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\"" Jul 2 07:05:28.759000 audit[3657]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:28.759000 audit[3657]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd69a654e0 a2=0 a3=7ffd69a654cc items=0 ppid=3296 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:28.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:28.759000 audit[3657]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:28.759000 audit[3657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd69a654e0 a2=0 a3=0 items=0 ppid=3296 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:28.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:29.858116 kubelet[3060]: E0702 07:05:29.857679 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:31.304285 containerd[1899]: time="2024-07-02T07:05:31.304152017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:31.306039 containerd[1899]: time="2024-07-02T07:05:31.305964410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 07:05:31.308898 containerd[1899]: time="2024-07-02T07:05:31.308845856Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:31.312506 containerd[1899]: time="2024-07-02T07:05:31.312459997Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:31.315759 containerd[1899]: time="2024-07-02T07:05:31.315718885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:31.317011 containerd[1899]: time="2024-07-02T07:05:31.316962331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.847568705s" Jul 2 07:05:31.317122 containerd[1899]: time="2024-07-02T07:05:31.317018447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 07:05:31.337593 containerd[1899]: time="2024-07-02T07:05:31.324854800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 07:05:31.337593 containerd[1899]: time="2024-07-02T07:05:31.331048075Z" level=info msg="CreateContainer within sandbox \"d3080a1f745ae8490968029df9ffe5e7316ea22d9965fe35cfca906c4db94e0f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 07:05:31.498779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3093082463.mount: Deactivated successfully. 
Jul 2 07:05:31.513637 containerd[1899]: time="2024-07-02T07:05:31.513515133Z" level=info msg="CreateContainer within sandbox \"d3080a1f745ae8490968029df9ffe5e7316ea22d9965fe35cfca906c4db94e0f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6c15fdf386790487b1a59cceb1349281f3598b3995f6976386d846830030b3f0\"" Jul 2 07:05:31.515590 containerd[1899]: time="2024-07-02T07:05:31.514605787Z" level=info msg="StartContainer for \"6c15fdf386790487b1a59cceb1349281f3598b3995f6976386d846830030b3f0\"" Jul 2 07:05:31.663326 containerd[1899]: time="2024-07-02T07:05:31.654966822Z" level=info msg="StartContainer for \"6c15fdf386790487b1a59cceb1349281f3598b3995f6976386d846830030b3f0\" returns successfully" Jul 2 07:05:31.843202 kubelet[3060]: E0702 07:05:31.842813 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:32.005388 kubelet[3060]: E0702 07:05:32.005351 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.005388 kubelet[3060]: W0702 07:05:32.005375 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.005874 kubelet[3060]: E0702 07:05:32.005439 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.005969 kubelet[3060]: E0702 07:05:32.005951 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.006023 kubelet[3060]: W0702 07:05:32.005971 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.006023 kubelet[3060]: E0702 07:05:32.005992 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.006273 kubelet[3060]: E0702 07:05:32.006261 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.006376 kubelet[3060]: W0702 07:05:32.006364 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.006526 kubelet[3060]: E0702 07:05:32.006465 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.006958 kubelet[3060]: E0702 07:05:32.006945 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.007188 kubelet[3060]: W0702 07:05:32.007174 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.007306 kubelet[3060]: E0702 07:05:32.007295 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.007807 kubelet[3060]: E0702 07:05:32.007793 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.007921 kubelet[3060]: W0702 07:05:32.007908 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.008032 kubelet[3060]: E0702 07:05:32.008019 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.008683 kubelet[3060]: E0702 07:05:32.008666 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.008926 kubelet[3060]: W0702 07:05:32.008682 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.008926 kubelet[3060]: E0702 07:05:32.008879 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.009139 kubelet[3060]: E0702 07:05:32.009113 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.009139 kubelet[3060]: W0702 07:05:32.009139 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.009251 kubelet[3060]: E0702 07:05:32.009159 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.009394 kubelet[3060]: E0702 07:05:32.009379 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.009454 kubelet[3060]: W0702 07:05:32.009404 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.009454 kubelet[3060]: E0702 07:05:32.009422 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.009685 kubelet[3060]: E0702 07:05:32.009671 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.009841 kubelet[3060]: W0702 07:05:32.009686 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.009841 kubelet[3060]: E0702 07:05:32.009813 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.010062 kubelet[3060]: E0702 07:05:32.010037 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.010062 kubelet[3060]: W0702 07:05:32.010062 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.010188 kubelet[3060]: E0702 07:05:32.010078 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.010474 kubelet[3060]: E0702 07:05:32.010458 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.010616 kubelet[3060]: W0702 07:05:32.010474 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.010616 kubelet[3060]: E0702 07:05:32.010491 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.011118 kubelet[3060]: E0702 07:05:32.010880 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.011118 kubelet[3060]: W0702 07:05:32.010892 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.011118 kubelet[3060]: E0702 07:05:32.010908 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.011542 kubelet[3060]: E0702 07:05:32.011481 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.011542 kubelet[3060]: W0702 07:05:32.011508 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.011542 kubelet[3060]: E0702 07:05:32.011527 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.012694 kubelet[3060]: E0702 07:05:32.011878 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.012694 kubelet[3060]: W0702 07:05:32.011891 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.012694 kubelet[3060]: E0702 07:05:32.011908 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:32.012694 kubelet[3060]: E0702 07:05:32.012194 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 07:05:32.012694 kubelet[3060]: W0702 07:05:32.012204 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:32.012694 kubelet[3060]: E0702 07:05:32.012220 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 07:05:32.829194 containerd[1899]: time="2024-07-02T07:05:32.829154461Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:32.830926 containerd[1899]: time="2024-07-02T07:05:32.830879889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 07:05:32.833149 containerd[1899]: time="2024-07-02T07:05:32.833118322Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:32.837243 containerd[1899]: time="2024-07-02T07:05:32.837207837Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:32.865090 containerd[1899]: time="2024-07-02T07:05:32.865047185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:32.866115 containerd[1899]: 
time="2024-07-02T07:05:32.866056426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.541146601s" Jul 2 07:05:32.866231 containerd[1899]: time="2024-07-02T07:05:32.866121493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 07:05:32.870446 containerd[1899]: time="2024-07-02T07:05:32.870404846Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 07:05:32.902327 containerd[1899]: time="2024-07-02T07:05:32.902204583Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c\"" Jul 2 07:05:32.903267 containerd[1899]: time="2024-07-02T07:05:32.903229600Z" level=info msg="StartContainer for \"02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c\"" Jul 2 07:05:32.980824 kubelet[3060]: I0702 07:05:32.979945 3060 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 07:05:33.015580 containerd[1899]: time="2024-07-02T07:05:33.015526126Z" level=info msg="StartContainer for \"02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c\" returns successfully" Jul 2 07:05:33.036017 kubelet[3060]: E0702 07:05:33.035989 3060 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jul 2 07:05:33.036214 kubelet[3060]: W0702 07:05:33.036195 3060 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 07:05:33.036313 kubelet[3060]: E0702 07:05:33.036301 3060 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 07:05:33.328695 systemd[1]: run-containerd-runc-k8s.io-02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c-runc.2Y3eAG.mount: Deactivated successfully. Jul 2 07:05:33.328860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c-rootfs.mount: Deactivated successfully. 
Jul 2 07:05:33.341285 containerd[1899]: time="2024-07-02T07:05:33.296029903Z" level=info msg="shim disconnected" id=02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c namespace=k8s.io Jul 2 07:05:33.341478 containerd[1899]: time="2024-07-02T07:05:33.341285747Z" level=warning msg="cleaning up after shim disconnected" id=02f6508b3d1e26f47306070becee335f176e2f97520e2e7157a63179dead147c namespace=k8s.io Jul 2 07:05:33.341478 containerd[1899]: time="2024-07-02T07:05:33.341306449Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:05:33.842127 kubelet[3060]: E0702 07:05:33.842091 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:33.985257 containerd[1899]: time="2024-07-02T07:05:33.983968153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 07:05:34.006497 kubelet[3060]: I0702 07:05:34.006454 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-79f9fd9777-55tv6" podStartSLOduration=4.149719185 podCreationTimestamp="2024-07-02 07:05:27 +0000 UTC" firstStartedPulling="2024-07-02 07:05:28.460846038 +0000 UTC m=+18.825919599" lastFinishedPulling="2024-07-02 07:05:31.317527827 +0000 UTC m=+21.682601395" observedRunningTime="2024-07-02 07:05:31.987594049 +0000 UTC m=+22.352667629" watchObservedRunningTime="2024-07-02 07:05:34.006400981 +0000 UTC m=+24.371474565" Jul 2 07:05:35.854788 kubelet[3060]: E0702 07:05:35.851814 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" 
podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:37.844906 kubelet[3060]: E0702 07:05:37.842815 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:38.870821 containerd[1899]: time="2024-07-02T07:05:38.870765887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:38.872114 containerd[1899]: time="2024-07-02T07:05:38.872058587Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 07:05:38.873680 containerd[1899]: time="2024-07-02T07:05:38.873648457Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:38.876049 containerd[1899]: time="2024-07-02T07:05:38.876021352Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:38.878531 containerd[1899]: time="2024-07-02T07:05:38.878495267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:38.879428 containerd[1899]: time="2024-07-02T07:05:38.879392152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size 
\"94535610\" in 4.895375601s" Jul 2 07:05:38.879577 containerd[1899]: time="2024-07-02T07:05:38.879530404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 07:05:38.883877 containerd[1899]: time="2024-07-02T07:05:38.883838105Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 07:05:38.909472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272998375.mount: Deactivated successfully. Jul 2 07:05:38.915396 containerd[1899]: time="2024-07-02T07:05:38.915349454Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5\"" Jul 2 07:05:38.918692 containerd[1899]: time="2024-07-02T07:05:38.916028407Z" level=info msg="StartContainer for \"dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5\"" Jul 2 07:05:39.015306 systemd[1]: run-containerd-runc-k8s.io-dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5-runc.9M6Tkn.mount: Deactivated successfully. 
Jul 2 07:05:39.074586 containerd[1899]: time="2024-07-02T07:05:39.071875822Z" level=info msg="StartContainer for \"dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5\" returns successfully" Jul 2 07:05:39.841983 kubelet[3060]: E0702 07:05:39.841949 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6" Jul 2 07:05:40.154207 kubelet[3060]: I0702 07:05:40.153857 3060 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 07:05:40.221442 kubelet[3060]: I0702 07:05:40.221392 3060 topology_manager.go:215] "Topology Admit Handler" podUID="c8048bda-b540-4626-b35e-23b1e8b8fc9d" podNamespace="kube-system" podName="coredns-5dd5756b68-c58xl" Jul 2 07:05:40.241480 kubelet[3060]: I0702 07:05:40.241449 3060 topology_manager.go:215] "Topology Admit Handler" podUID="a23e1fef-bc43-4e75-8a26-2e335205ef00" podNamespace="kube-system" podName="coredns-5dd5756b68-6v7x9" Jul 2 07:05:40.259886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5-rootfs.mount: Deactivated successfully. 
Jul 2 07:05:40.266830 kubelet[3060]: I0702 07:05:40.262217 3060 topology_manager.go:215] "Topology Admit Handler" podUID="ed2c8cf8-b127-46b4-9250-2cadb569e047" podNamespace="calico-system" podName="calico-kube-controllers-5fc9685798-vqxr2"
Jul 2 07:05:40.266830 kubelet[3060]: I0702 07:05:40.265842 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8048bda-b540-4626-b35e-23b1e8b8fc9d-config-volume\") pod \"coredns-5dd5756b68-c58xl\" (UID: \"c8048bda-b540-4626-b35e-23b1e8b8fc9d\") " pod="kube-system/coredns-5dd5756b68-c58xl"
Jul 2 07:05:40.266830 kubelet[3060]: I0702 07:05:40.265924 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9c4f\" (UniqueName: \"kubernetes.io/projected/c8048bda-b540-4626-b35e-23b1e8b8fc9d-kube-api-access-t9c4f\") pod \"coredns-5dd5756b68-c58xl\" (UID: \"c8048bda-b540-4626-b35e-23b1e8b8fc9d\") " pod="kube-system/coredns-5dd5756b68-c58xl"
Jul 2 07:05:40.283007 containerd[1899]: time="2024-07-02T07:05:40.282935850Z" level=info msg="shim disconnected" id=dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5 namespace=k8s.io
Jul 2 07:05:40.283007 containerd[1899]: time="2024-07-02T07:05:40.283006279Z" level=warning msg="cleaning up after shim disconnected" id=dcdea41fc071c37096d228913965031b6b928809baaf51580b957e64ffe04ea5 namespace=k8s.io
Jul 2 07:05:40.283624 containerd[1899]: time="2024-07-02T07:05:40.283017923Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 07:05:40.366845 kubelet[3060]: I0702 07:05:40.366733 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a23e1fef-bc43-4e75-8a26-2e335205ef00-config-volume\") pod \"coredns-5dd5756b68-6v7x9\" (UID: \"a23e1fef-bc43-4e75-8a26-2e335205ef00\") " pod="kube-system/coredns-5dd5756b68-6v7x9"
Jul 2 07:05:40.366845 kubelet[3060]: I0702 07:05:40.366794 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nh9t\" (UniqueName: \"kubernetes.io/projected/a23e1fef-bc43-4e75-8a26-2e335205ef00-kube-api-access-8nh9t\") pod \"coredns-5dd5756b68-6v7x9\" (UID: \"a23e1fef-bc43-4e75-8a26-2e335205ef00\") " pod="kube-system/coredns-5dd5756b68-6v7x9"
Jul 2 07:05:40.367411 kubelet[3060]: I0702 07:05:40.367270 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcrz\" (UniqueName: \"kubernetes.io/projected/ed2c8cf8-b127-46b4-9250-2cadb569e047-kube-api-access-kdcrz\") pod \"calico-kube-controllers-5fc9685798-vqxr2\" (UID: \"ed2c8cf8-b127-46b4-9250-2cadb569e047\") " pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2"
Jul 2 07:05:40.367411 kubelet[3060]: I0702 07:05:40.367318 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed2c8cf8-b127-46b4-9250-2cadb569e047-tigera-ca-bundle\") pod \"calico-kube-controllers-5fc9685798-vqxr2\" (UID: \"ed2c8cf8-b127-46b4-9250-2cadb569e047\") " pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2"
Jul 2 07:05:40.536770 containerd[1899]: time="2024-07-02T07:05:40.536642347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c58xl,Uid:c8048bda-b540-4626-b35e-23b1e8b8fc9d,Namespace:kube-system,Attempt:0,}"
Jul 2 07:05:40.569034 containerd[1899]: time="2024-07-02T07:05:40.568711282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc9685798-vqxr2,Uid:ed2c8cf8-b127-46b4-9250-2cadb569e047,Namespace:calico-system,Attempt:0,}"
Jul 2 07:05:40.588992 containerd[1899]: time="2024-07-02T07:05:40.588740426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6v7x9,Uid:a23e1fef-bc43-4e75-8a26-2e335205ef00,Namespace:kube-system,Attempt:0,}"
Jul 2 07:05:40.845525 containerd[1899]: time="2024-07-02T07:05:40.845299593Z" level=error msg="Failed to destroy network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.846270 containerd[1899]: time="2024-07-02T07:05:40.846211354Z" level=error msg="encountered an error cleaning up failed sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.847346 containerd[1899]: time="2024-07-02T07:05:40.847277765Z" level=error msg="Failed to destroy network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.848640 containerd[1899]: time="2024-07-02T07:05:40.847842101Z" level=error msg="encountered an error cleaning up failed sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.857957 containerd[1899]: time="2024-07-02T07:05:40.857892607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c58xl,Uid:c8048bda-b540-4626-b35e-23b1e8b8fc9d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.858148 containerd[1899]: time="2024-07-02T07:05:40.858108029Z" level=error msg="Failed to destroy network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.858443 containerd[1899]: time="2024-07-02T07:05:40.858405083Z" level=error msg="encountered an error cleaning up failed sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.858518 containerd[1899]: time="2024-07-02T07:05:40.858457478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6v7x9,Uid:a23e1fef-bc43-4e75-8a26-2e335205ef00,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.860421 kubelet[3060]: E0702 07:05:40.858779 3060 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.860421 kubelet[3060]: E0702 07:05:40.858847 3060 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6v7x9"
Jul 2 07:05:40.860421 kubelet[3060]: E0702 07:05:40.858883 3060 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-6v7x9"
Jul 2 07:05:40.861016 kubelet[3060]: E0702 07:05:40.858949 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-6v7x9_kube-system(a23e1fef-bc43-4e75-8a26-2e335205ef00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-6v7x9_kube-system(a23e1fef-bc43-4e75-8a26-2e335205ef00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6v7x9" podUID="a23e1fef-bc43-4e75-8a26-2e335205ef00"
Jul 2 07:05:40.861016 kubelet[3060]: E0702 07:05:40.860162 3060 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.861016 kubelet[3060]: E0702 07:05:40.860294 3060 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c58xl"
Jul 2 07:05:40.861177 kubelet[3060]: E0702 07:05:40.860394 3060 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-c58xl"
Jul 2 07:05:40.861177 kubelet[3060]: E0702 07:05:40.860457 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-c58xl_kube-system(c8048bda-b540-4626-b35e-23b1e8b8fc9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-c58xl_kube-system(c8048bda-b540-4626-b35e-23b1e8b8fc9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c58xl" podUID="c8048bda-b540-4626-b35e-23b1e8b8fc9d"
Jul 2 07:05:40.877380 containerd[1899]: time="2024-07-02T07:05:40.877309658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc9685798-vqxr2,Uid:ed2c8cf8-b127-46b4-9250-2cadb569e047,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.877779 kubelet[3060]: E0702 07:05:40.877753 3060 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:40.877898 kubelet[3060]: E0702 07:05:40.877820 3060 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2"
Jul 2 07:05:40.877898 kubelet[3060]: E0702 07:05:40.877849 3060 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2"
Jul 2 07:05:40.877991 kubelet[3060]: E0702 07:05:40.877917 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fc9685798-vqxr2_calico-system(ed2c8cf8-b127-46b4-9250-2cadb569e047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fc9685798-vqxr2_calico-system(ed2c8cf8-b127-46b4-9250-2cadb569e047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2" podUID="ed2c8cf8-b127-46b4-9250-2cadb569e047"
Jul 2 07:05:41.076309 containerd[1899]: time="2024-07-02T07:05:41.041581987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Jul 2 07:05:41.083863 kubelet[3060]: I0702 07:05:41.083829 3060 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814"
Jul 2 07:05:41.085303 containerd[1899]: time="2024-07-02T07:05:41.085088600Z" level=info msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\""
Jul 2 07:05:41.093444 containerd[1899]: time="2024-07-02T07:05:41.093400467Z" level=info msg="Ensure that sandbox b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814 in task-service has been cleanup successfully"
Jul 2 07:05:41.095031 kubelet[3060]: I0702 07:05:41.094965 3060 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b"
Jul 2 07:05:41.118416 containerd[1899]: time="2024-07-02T07:05:41.117148191Z" level=info msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\""
Jul 2 07:05:41.118416 containerd[1899]: time="2024-07-02T07:05:41.118397309Z" level=info msg="Ensure that sandbox 8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b in task-service has been cleanup successfully"
Jul 2 07:05:41.123585 kubelet[3060]: I0702 07:05:41.121121 3060 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831"
Jul 2 07:05:41.124052 containerd[1899]: time="2024-07-02T07:05:41.124010918Z" level=info msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\""
Jul 2 07:05:41.124434 containerd[1899]: time="2024-07-02T07:05:41.124330773Z" level=info msg="Ensure that sandbox f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831 in task-service has been cleanup successfully"
Jul 2 07:05:41.230277 containerd[1899]: time="2024-07-02T07:05:41.230201569Z" level=error msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" failed" error="failed to destroy network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.231172 kubelet[3060]: E0702 07:05:41.230846 3060 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b"
Jul 2 07:05:41.231172 kubelet[3060]: E0702 07:05:41.230964 3060 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b"}
Jul 2 07:05:41.231172 kubelet[3060]: E0702 07:05:41.231033 3060 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed2c8cf8-b127-46b4-9250-2cadb569e047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 07:05:41.231172 kubelet[3060]: E0702 07:05:41.231122 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed2c8cf8-b127-46b4-9250-2cadb569e047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2" podUID="ed2c8cf8-b127-46b4-9250-2cadb569e047"
Jul 2 07:05:41.236300 containerd[1899]: time="2024-07-02T07:05:41.236217366Z" level=error msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" failed" error="failed to destroy network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.236977 kubelet[3060]: E0702 07:05:41.236734 3060 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831"
Jul 2 07:05:41.236977 kubelet[3060]: E0702 07:05:41.236801 3060 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831"}
Jul 2 07:05:41.236977 kubelet[3060]: E0702 07:05:41.236879 3060 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a23e1fef-bc43-4e75-8a26-2e335205ef00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 07:05:41.236977 kubelet[3060]: E0702 07:05:41.236955 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a23e1fef-bc43-4e75-8a26-2e335205ef00\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-6v7x9" podUID="a23e1fef-bc43-4e75-8a26-2e335205ef00"
Jul 2 07:05:41.242800 containerd[1899]: time="2024-07-02T07:05:41.242741057Z" level=error msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" failed" error="failed to destroy network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.243133 kubelet[3060]: E0702 07:05:41.243104 3060 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814"
Jul 2 07:05:41.243250 kubelet[3060]: E0702 07:05:41.243157 3060 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814"}
Jul 2 07:05:41.243250 kubelet[3060]: E0702 07:05:41.243206 3060 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8048bda-b540-4626-b35e-23b1e8b8fc9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 07:05:41.243385 kubelet[3060]: E0702 07:05:41.243246 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8048bda-b540-4626-b35e-23b1e8b8fc9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-c58xl" podUID="c8048bda-b540-4626-b35e-23b1e8b8fc9d"
Jul 2 07:05:41.262103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814-shm.mount: Deactivated successfully.
Jul 2 07:05:41.852619 containerd[1899]: time="2024-07-02T07:05:41.852571072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxlxj,Uid:599f2da3-47a6-4b46-afa0-843612b872b6,Namespace:calico-system,Attempt:0,}"
Jul 2 07:05:41.965368 containerd[1899]: time="2024-07-02T07:05:41.965309791Z" level=error msg="Failed to destroy network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.971427 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315-shm.mount: Deactivated successfully.
Jul 2 07:05:41.973782 containerd[1899]: time="2024-07-02T07:05:41.973722853Z" level=error msg="encountered an error cleaning up failed sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.973958 containerd[1899]: time="2024-07-02T07:05:41.973809635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxlxj,Uid:599f2da3-47a6-4b46-afa0-843612b872b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.974120 kubelet[3060]: E0702 07:05:41.974080 3060 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:41.975881 kubelet[3060]: E0702 07:05:41.974355 3060 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxlxj"
Jul 2 07:05:41.975881 kubelet[3060]: E0702 07:05:41.974410 3060 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fxlxj"
Jul 2 07:05:41.975881 kubelet[3060]: E0702 07:05:41.974497 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fxlxj_calico-system(599f2da3-47a6-4b46-afa0-843612b872b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fxlxj_calico-system(599f2da3-47a6-4b46-afa0-843612b872b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6"
Jul 2 07:05:42.125724 kubelet[3060]: I0702 07:05:42.125019 3060 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315"
Jul 2 07:05:42.127750 containerd[1899]: time="2024-07-02T07:05:42.126183301Z" level=info msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\""
Jul 2 07:05:42.127750 containerd[1899]: time="2024-07-02T07:05:42.126427398Z" level=info msg="Ensure that sandbox a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315 in task-service has been cleanup successfully"
Jul 2 07:05:42.188360 containerd[1899]: time="2024-07-02T07:05:42.188297098Z" level=error msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" failed" error="failed to destroy network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 2 07:05:42.189089 kubelet[3060]: E0702 07:05:42.188860 3060 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315"
Jul 2 07:05:42.189089 kubelet[3060]: E0702 07:05:42.188925 3060 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315"}
Jul 2 07:05:42.189089 kubelet[3060]: E0702 07:05:42.188978 3060 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"599f2da3-47a6-4b46-afa0-843612b872b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jul 2 07:05:42.189089 kubelet[3060]: E0702 07:05:42.189041 3060 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"599f2da3-47a6-4b46-afa0-843612b872b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fxlxj" podUID="599f2da3-47a6-4b46-afa0-843612b872b6"
Jul 2 07:05:44.954048 kubelet[3060]: I0702 07:05:44.953647 3060 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 2 07:05:45.050747 kernel: kauditd_printk_skb: 8 callbacks suppressed
Jul 2 07:05:45.052413 kernel: audit: type=1325 audit(1719903945.048:277): table=filter:95 family=2 entries=15 op=nft_register_rule pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:05:45.048000 audit[4126]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:05:45.048000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffccb40c5c0 a2=0 a3=7ffccb40c5ac items=0 ppid=3296 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:45.059272 kernel: audit: type=1300 audit(1719903945.048:277): arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffccb40c5c0 a2=0 a3=7ffccb40c5ac items=0 ppid=3296 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:45.059383 kernel: audit: type=1327 audit(1719903945.048:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:05:45.048000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:05:45.060000 audit[4126]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:05:45.068175 kernel: audit: type=1325 audit(1719903945.060:278): table=nat:96 family=2 entries=19 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:05:45.068276 kernel: audit: type=1300 audit(1719903945.060:278): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffccb40c5c0 a2=0 a3=7ffccb40c5ac items=0 ppid=3296 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:45.068314 kernel: audit: type=1327 audit(1719903945.060:278): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:05:45.060000 audit[4126]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffccb40c5c0 a2=0 a3=7ffccb40c5ac items=0 ppid=3296 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:05:45.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:05:47.447729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017213071.mount: Deactivated successfully.
Jul 2 07:05:47.498933 containerd[1899]: time="2024-07-02T07:05:47.498887039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:05:47.500424 containerd[1899]: time="2024-07-02T07:05:47.500356686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Jul 2 07:05:47.503018 containerd[1899]: time="2024-07-02T07:05:47.502602221Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:05:47.504691 containerd[1899]: time="2024-07-02T07:05:47.504653469Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:05:47.507475 containerd[1899]: time="2024-07-02T07:05:47.506928823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 2 07:05:47.508420 containerd[1899]: time="2024-07-02T07:05:47.508384118Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 6.466751341s"
Jul 2 07:05:47.508584 containerd[1899]: time="2024-07-02T07:05:47.508537563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Jul 2 07:05:47.562461 containerd[1899]: time="2024-07-02T07:05:47.559958475Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jul 2 07:05:47.594624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount820052799.mount: Deactivated successfully.
Jul 2 07:05:47.601164 containerd[1899]: time="2024-07-02T07:05:47.601057580Z" level=info msg="CreateContainer within sandbox \"2099c45f781cfebbcdc37fea137ba50e08d5dc271dc08c42d86e1e542cff09ae\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c90bcbb7145693bae1e5b78ef3b77fe2a21d5c45eb668017126ac8d52bd423e2\""
Jul 2 07:05:47.622272 containerd[1899]: time="2024-07-02T07:05:47.622224884Z" level=info msg="StartContainer for \"c90bcbb7145693bae1e5b78ef3b77fe2a21d5c45eb668017126ac8d52bd423e2\""
Jul 2 07:05:47.790357 containerd[1899]: time="2024-07-02T07:05:47.790243114Z" level=info msg="StartContainer for \"c90bcbb7145693bae1e5b78ef3b77fe2a21d5c45eb668017126ac8d52bd423e2\" returns successfully"
Jul 2 07:05:47.908371 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 2 07:05:47.908547 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Jul 2 07:05:49.204983 systemd[1]: run-containerd-runc-k8s.io-c90bcbb7145693bae1e5b78ef3b77fe2a21d5c45eb668017126ac8d52bd423e2-runc.kYzC0o.mount: Deactivated successfully.
Jul 2 07:05:49.567932 kernel: audit: type=1400 audit(1719903949.557:279): avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=26196 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.568082 kernel: audit: type=1300 audit(1719903949.557:279): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe05855a14 a2=241 a3=1b6 items=1 ppid=4255 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.557000 audit[4281]: AVC avc: denied { write } for pid=4281 comm="tee" name="fd" dev="proc" ino=26196 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.557000 audit[4281]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe05855a14 a2=241 a3=1b6 items=1 ppid=4255 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.570572 kernel: audit: type=1307 audit(1719903949.557:279): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 07:05:49.557000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 07:05:49.557000 audit: PATH item=0 name="/dev/fd/63" inode=25474 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.576752 kernel: audit: type=1302 audit(1719903949.557:279): item=0 name="/dev/fd/63" inode=25474 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.557000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.616000 audit[4314]: AVC avc: denied { write } for pid=4314 comm="tee" name="fd" dev="proc" ino=25505 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.616000 audit[4314]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd41889a24 a2=241 a3=1b6 items=1 ppid=4246 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.616000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 07:05:49.616000 audit: PATH item=0 name="/dev/fd/63" inode=26212 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.621000 audit[4296]: AVC avc: denied { write } for pid=4296 comm="tee" name="fd" dev="proc" ino=26215 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.621000 audit[4296]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc407b0a24 a2=241 a3=1b6 items=1 ppid=4257 pid=4296 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.621000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 07:05:49.621000 audit: PATH item=0 name="/dev/fd/63" inode=25495 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.621000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.644000 audit[4306]: AVC avc: denied { write } for pid=4306 comm="tee" name="fd" dev="proc" ino=26221 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.644000 audit[4306]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdefdfda26 a2=241 a3=1b6 items=1 ppid=4249 pid=4306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.644000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 07:05:49.644000 audit: PATH item=0 name="/dev/fd/63" inode=26210 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.644000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.655000 audit[4299]: AVC avc: denied { write } for pid=4299 comm="tee" name="fd" dev="proc" ino=25515 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.661000 audit[4311]: AVC avc: denied { write } for pid=4311 comm="tee" name="fd" dev="proc" ino=25519 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.655000 audit[4299]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc8587ca24 a2=241 a3=1b6 items=1 ppid=4252 pid=4299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.655000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 07:05:49.655000 audit: PATH item=0 name="/dev/fd/63" inode=25498 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.655000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.661000 audit[4311]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc21cffa25 a2=241 a3=1b6 items=1 ppid=4253 pid=4311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.661000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 07:05:49.661000 audit: PATH item=0 name="/dev/fd/63" inode=26211 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.661000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:49.698000 audit[4324]: AVC avc: denied { write } for pid=4324 comm="tee" name="fd" dev="proc" ino=25524 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 07:05:49.698000 audit[4324]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdef52ca15 a2=241 a3=1b6 items=1 ppid=4254 pid=4324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:49.698000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 
07:05:49.698000 audit: PATH item=0 name="/dev/fd/63" inode=25516 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:05:49.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 07:05:50.215096 systemd[1]: run-containerd-runc-k8s.io-c90bcbb7145693bae1e5b78ef3b77fe2a21d5c45eb668017126ac8d52bd423e2-runc.KtldYl.mount: Deactivated successfully. Jul 2 07:05:50.669739 systemd-networkd[1583]: vxlan.calico: Link UP Jul 2 07:05:50.669747 systemd-networkd[1583]: vxlan.calico: Gained carrier Jul 2 07:05:50.677484 (udev-worker)[4171]: Network interface NamePolicy= disabled on kernel command line. Jul 2 07:05:50.809020 (udev-worker)[4404]: Network interface NamePolicy= disabled on kernel command line. Jul 2 07:05:50.846363 kernel: kauditd_printk_skb: 31 callbacks suppressed Jul 2 07:05:50.846516 kernel: audit: type=1334 audit(1719903950.835:286): prog-id=10 op=LOAD Jul 2 07:05:50.846568 kernel: audit: type=1300 audit(1719903950.835:286): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0ba6a3d0 a2=70 a3=7f9ab2890000 items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.846638 kernel: audit: type=1327 audit(1719903950.835:286): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.835000 audit: BPF prog-id=10 op=LOAD Jul 2 07:05:50.835000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0ba6a3d0 a2=70 a3=7f9ab2890000 items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.835000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.854760 kernel: audit: type=1334 audit(1719903950.835:287): prog-id=10 op=UNLOAD Jul 2 07:05:50.854930 kernel: audit: type=1334 audit(1719903950.835:288): prog-id=11 op=LOAD Jul 2 07:05:50.854969 kernel: audit: type=1300 audit(1719903950.835:288): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0ba6a3d0 a2=70 a3=6f items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.835000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:05:50.835000 audit: BPF prog-id=11 op=LOAD Jul 2 07:05:50.835000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc0ba6a3d0 a2=70 a3=6f items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.869371 kernel: audit: type=1327 audit(1719903950.835:288): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.869494 kernel: audit: type=1334 audit(1719903950.835:289): prog-id=11 op=UNLOAD Jul 2 07:05:50.869534 kernel: audit: type=1334 audit(1719903950.835:290): prog-id=12 op=LOAD Jul 2 07:05:50.869581 kernel: audit: type=1300 audit(1719903950.835:290): arch=c000003e syscall=321 success=yes 
exit=5 a0=5 a1=7ffc0ba6a360 a2=70 a3=7ffc0ba6a3d0 items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.835000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.835000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:05:50.835000 audit: BPF prog-id=12 op=LOAD Jul 2 07:05:50.835000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc0ba6a360 a2=70 a3=7ffc0ba6a3d0 items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.835000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.836000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:05:50.837000 audit: BPF prog-id=13 op=LOAD Jul 2 07:05:50.837000 audit[4409]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc0ba6a390 a2=70 a3=0 items=0 ppid=4247 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.837000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 07:05:50.891000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:05:50.996000 
audit[4438]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4438 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:50.996000 audit[4438]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff887a90b0 a2=0 a3=7fff887a909c items=0 ppid=4247 pid=4438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:50.996000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:51.008000 audit[4437]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=4437 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:51.008000 audit[4437]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7ffc93be9a60 a2=0 a3=7ffc93be9a4c items=0 ppid=4247 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:51.008000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:51.008000 audit[4436]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=4436 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:51.008000 audit[4436]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffee3f79300 a2=0 a3=55b140435000 items=0 ppid=4247 pid=4436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:51.008000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:51.009000 audit[4439]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:51.009000 audit[4439]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffe40ec0d40 a2=0 a3=7ffe40ec0d2c items=0 ppid=4247 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:51.009000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:51.759740 systemd-networkd[1583]: vxlan.calico: Gained IPv6LL Jul 2 07:05:53.851545 containerd[1899]: time="2024-07-02T07:05:53.849012362Z" level=info msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\"" Jul 2 07:05:53.945976 kubelet[3060]: I0702 07:05:53.945891 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qmr4t" podStartSLOduration=6.910758678 podCreationTimestamp="2024-07-02 07:05:28 +0000 UTC" firstStartedPulling="2024-07-02 07:05:28.480238867 +0000 UTC m=+18.845312429" lastFinishedPulling="2024-07-02 07:05:47.508994633 +0000 UTC m=+37.874068200" observedRunningTime="2024-07-02 07:05:48.175211988 +0000 UTC m=+38.540285568" watchObservedRunningTime="2024-07-02 07:05:53.939514449 +0000 UTC m=+44.304588030" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.930 [INFO][4465] k8s.go 608: Cleaning up netns ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.932 [INFO][4465] 
dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" iface="eth0" netns="/var/run/netns/cni-e6f2512f-5f10-f57f-c6d8-e1ac4947076d" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.932 [INFO][4465] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" iface="eth0" netns="/var/run/netns/cni-e6f2512f-5f10-f57f-c6d8-e1ac4947076d" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.933 [INFO][4465] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" iface="eth0" netns="/var/run/netns/cni-e6f2512f-5f10-f57f-c6d8-e1ac4947076d" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.933 [INFO][4465] k8s.go 615: Releasing IP address(es) ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:53.933 [INFO][4465] utils.go 188: Calico CNI releasing IP address ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.249 [INFO][4471] ipam_plugin.go 411: Releasing address using handleID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.252 [INFO][4471] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.253 [INFO][4471] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.271 [WARNING][4471] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.272 [INFO][4471] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.274 [INFO][4471] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:54.278642 containerd[1899]: 2024-07-02 07:05:54.276 [INFO][4465] k8s.go 621: Teardown processing complete. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:05:54.287234 containerd[1899]: time="2024-07-02T07:05:54.284250941Z" level=info msg="TearDown network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" successfully" Jul 2 07:05:54.287234 containerd[1899]: time="2024-07-02T07:05:54.284296760Z" level=info msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" returns successfully" Jul 2 07:05:54.287234 containerd[1899]: time="2024-07-02T07:05:54.285381747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c58xl,Uid:c8048bda-b540-4626-b35e-23b1e8b8fc9d,Namespace:kube-system,Attempt:1,}" Jul 2 07:05:54.284351 systemd[1]: run-netns-cni\x2de6f2512f\x2d5f10\x2df57f\x2dc6d8\x2de1ac4947076d.mount: Deactivated successfully. Jul 2 07:05:54.541607 systemd-networkd[1583]: cali67af87431b3: Link UP Jul 2 07:05:54.542512 (udev-worker)[4496]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 07:05:54.546729 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:05:54.546827 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali67af87431b3: link becomes ready Jul 2 07:05:54.545895 systemd-networkd[1583]: cali67af87431b3: Gained carrier Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.399 [INFO][4477] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0 coredns-5dd5756b68- kube-system c8048bda-b540-4626-b35e-23b1e8b8fc9d 678 0 2024-07-02 07:05:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-147 coredns-5dd5756b68-c58xl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67af87431b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.400 [INFO][4477] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.488 [INFO][4489] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" HandleID="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.497 [INFO][4489] ipam_plugin.go 264: Auto assigning 
IP ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" HandleID="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-147", "pod":"coredns-5dd5756b68-c58xl", "timestamp":"2024-07-02 07:05:54.488055305 +0000 UTC"}, Hostname:"ip-172-31-25-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.497 [INFO][4489] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.497 [INFO][4489] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.497 [INFO][4489] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-147' Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.500 [INFO][4489] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.507 [INFO][4489] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.511 [INFO][4489] ipam.go 489: Trying affinity for 192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.513 [INFO][4489] ipam.go 155: Attempting to load block cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.516 [INFO][4489] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.0/26 
host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.516 [INFO][4489] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.0/26 handle="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.518 [INFO][4489] ipam.go 1685: Creating new handle: k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9 Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.525 [INFO][4489] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.0/26 handle="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.531 [INFO][4489] ipam.go 1216: Successfully claimed IPs: [192.168.14.1/26] block=192.168.14.0/26 handle="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.532 [INFO][4489] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.1/26] handle="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" host="ip-172-31-25-147" Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.532 [INFO][4489] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 07:05:54.567653 containerd[1899]: 2024-07-02 07:05:54.532 [INFO][4489] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.1/26] IPv6=[] ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" HandleID="k8s-pod-network.6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.568493 containerd[1899]: 2024-07-02 07:05:54.535 [INFO][4477] k8s.go 386: Populated endpoint ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c8048bda-b540-4626-b35e-23b1e8b8fc9d", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"", Pod:"coredns-5dd5756b68-c58xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67af87431b3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:54.568493 containerd[1899]: 2024-07-02 07:05:54.536 [INFO][4477] k8s.go 387: Calico CNI using IPs: [192.168.14.1/32] ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.568493 containerd[1899]: 2024-07-02 07:05:54.536 [INFO][4477] dataplane_linux.go 68: Setting the host side veth name to cali67af87431b3 ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.568493 containerd[1899]: 2024-07-02 07:05:54.547 [INFO][4477] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.568493 containerd[1899]: 2024-07-02 07:05:54.547 [INFO][4477] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c8048bda-b540-4626-b35e-23b1e8b8fc9d", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9", Pod:"coredns-5dd5756b68-c58xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67af87431b3", MAC:"ca:2a:c8:0c:bc:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:54.568892 containerd[1899]: 2024-07-02 07:05:54.564 [INFO][4477] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9" Namespace="kube-system" 
Pod="coredns-5dd5756b68-c58xl" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:05:54.597000 audit[4511]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4511 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:54.597000 audit[4511]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7ffe31b80240 a2=0 a3=7ffe31b8022c items=0 ppid=4247 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:54.597000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:54.616377 containerd[1899]: time="2024-07-02T07:05:54.616298192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:54.616713 containerd[1899]: time="2024-07-02T07:05:54.616356174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:54.616713 containerd[1899]: time="2024-07-02T07:05:54.616385656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:54.616713 containerd[1899]: time="2024-07-02T07:05:54.616399033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:54.698249 containerd[1899]: time="2024-07-02T07:05:54.698202975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-c58xl,Uid:c8048bda-b540-4626-b35e-23b1e8b8fc9d,Namespace:kube-system,Attempt:1,} returns sandbox id \"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9\"" Jul 2 07:05:54.709766 containerd[1899]: time="2024-07-02T07:05:54.709732949Z" level=info msg="CreateContainer within sandbox \"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:05:54.742873 containerd[1899]: time="2024-07-02T07:05:54.742817531Z" level=info msg="CreateContainer within sandbox \"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fece47add9a364f2350105d6786ec1c59822de28bc0e0f8979fdfb9866bb6815\"" Jul 2 07:05:54.743468 containerd[1899]: time="2024-07-02T07:05:54.743404737Z" level=info msg="StartContainer for \"fece47add9a364f2350105d6786ec1c59822de28bc0e0f8979fdfb9866bb6815\"" Jul 2 07:05:54.819019 containerd[1899]: time="2024-07-02T07:05:54.817274934Z" level=info msg="StartContainer for \"fece47add9a364f2350105d6786ec1c59822de28bc0e0f8979fdfb9866bb6815\" returns successfully" Jul 2 07:05:54.843242 containerd[1899]: time="2024-07-02T07:05:54.843052558Z" level=info msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\"" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.903 [INFO][4602] k8s.go 608: Cleaning up netns ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.903 [INFO][4602] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" iface="eth0" netns="/var/run/netns/cni-1cddb6dc-0bc2-f286-7bee-b87c172ca9fe" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.903 [INFO][4602] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" iface="eth0" netns="/var/run/netns/cni-1cddb6dc-0bc2-f286-7bee-b87c172ca9fe" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.904 [INFO][4602] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" iface="eth0" netns="/var/run/netns/cni-1cddb6dc-0bc2-f286-7bee-b87c172ca9fe" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.904 [INFO][4602] k8s.go 615: Releasing IP address(es) ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.904 [INFO][4602] utils.go 188: Calico CNI releasing IP address ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.933 [INFO][4612] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.933 [INFO][4612] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.933 [INFO][4612] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.939 [WARNING][4612] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.940 [INFO][4612] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.941 [INFO][4612] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:54.945770 containerd[1899]: 2024-07-02 07:05:54.943 [INFO][4602] k8s.go 621: Teardown processing complete. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:05:54.946866 containerd[1899]: time="2024-07-02T07:05:54.946167472Z" level=info msg="TearDown network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" successfully" Jul 2 07:05:54.946866 containerd[1899]: time="2024-07-02T07:05:54.946206493Z" level=info msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" returns successfully" Jul 2 07:05:54.947301 containerd[1899]: time="2024-07-02T07:05:54.947273147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc9685798-vqxr2,Uid:ed2c8cf8-b127-46b4-9250-2cadb569e047,Namespace:calico-system,Attempt:1,}" Jul 2 07:05:55.104831 (udev-worker)[4498]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 07:05:55.105815 systemd-networkd[1583]: cali56cee1158e7: Link UP Jul 2 07:05:55.108689 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali56cee1158e7: link becomes ready Jul 2 07:05:55.108396 systemd-networkd[1583]: cali56cee1158e7: Gained carrier Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.010 [INFO][4619] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0 calico-kube-controllers-5fc9685798- calico-system ed2c8cf8-b127-46b4-9250-2cadb569e047 690 0 2024-07-02 07:05:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fc9685798 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-25-147 calico-kube-controllers-5fc9685798-vqxr2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali56cee1158e7 [] []}} ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.011 [INFO][4619] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.057 [INFO][4630] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" HandleID="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" 
Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.067 [INFO][4630] ipam_plugin.go 264: Auto assigning IP ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" HandleID="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318770), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-147", "pod":"calico-kube-controllers-5fc9685798-vqxr2", "timestamp":"2024-07-02 07:05:55.057485318 +0000 UTC"}, Hostname:"ip-172-31-25-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.067 [INFO][4630] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.067 [INFO][4630] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.067 [INFO][4630] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-147' Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.069 [INFO][4630] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.073 [INFO][4630] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.080 [INFO][4630] ipam.go 489: Trying affinity for 192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.083 [INFO][4630] ipam.go 155: Attempting to load block cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.086 [INFO][4630] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.086 [INFO][4630] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.0/26 handle="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.088 [INFO][4630] ipam.go 1685: Creating new handle: k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022 Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.092 [INFO][4630] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.0/26 handle="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.098 [INFO][4630] ipam.go 1216: Successfully claimed IPs: [192.168.14.2/26] block=192.168.14.0/26 
handle="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.099 [INFO][4630] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.2/26] handle="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" host="ip-172-31-25-147" Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.099 [INFO][4630] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:55.153751 containerd[1899]: 2024-07-02 07:05:55.099 [INFO][4630] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.2/26] IPv6=[] ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" HandleID="k8s-pod-network.04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.101 [INFO][4619] k8s.go 386: Populated endpoint ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0", GenerateName:"calico-kube-controllers-5fc9685798-", Namespace:"calico-system", SelfLink:"", UID:"ed2c8cf8-b127-46b4-9250-2cadb569e047", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc9685798", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"", Pod:"calico-kube-controllers-5fc9685798-vqxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56cee1158e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.101 [INFO][4619] k8s.go 387: Calico CNI using IPs: [192.168.14.2/32] ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.101 [INFO][4619] dataplane_linux.go 68: Setting the host side veth name to cali56cee1158e7 ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.109 [INFO][4619] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.110 [INFO][4619] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0", GenerateName:"calico-kube-controllers-5fc9685798-", Namespace:"calico-system", SelfLink:"", UID:"ed2c8cf8-b127-46b4-9250-2cadb569e047", ResourceVersion:"690", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc9685798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022", Pod:"calico-kube-controllers-5fc9685798-vqxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56cee1158e7", MAC:"6a:16:22:95:7b:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:55.155889 containerd[1899]: 2024-07-02 07:05:55.150 [INFO][4619] k8s.go 500: Wrote updated endpoint 
to datastore ContainerID="04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022" Namespace="calico-system" Pod="calico-kube-controllers-5fc9685798-vqxr2" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:05:55.164000 audit[4649]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4649 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:55.164000 audit[4649]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7fff96f909c0 a2=0 a3=7fff96f909ac items=0 ppid=4247 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:55.164000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:55.198363 containerd[1899]: time="2024-07-02T07:05:55.198184781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:55.198363 containerd[1899]: time="2024-07-02T07:05:55.198249449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:55.198841 containerd[1899]: time="2024-07-02T07:05:55.198647148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:55.198841 containerd[1899]: time="2024-07-02T07:05:55.198674991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:55.236000 audit[4684]: NETFILTER_CFG table=filter:103 family=2 entries=14 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:55.236000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffff4ede5e0 a2=0 a3=7ffff4ede5cc items=0 ppid=3296 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:55.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:55.237000 audit[4684]: NETFILTER_CFG table=nat:104 family=2 entries=14 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:55.237000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffff4ede5e0 a2=0 a3=0 items=0 ppid=3296 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:55.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:55.294637 systemd[1]: run-netns-cni\x2d1cddb6dc\x2d0bc2\x2df286\x2d7bee\x2db87c172ca9fe.mount: Deactivated successfully. 
Jul 2 07:05:55.317936 containerd[1899]: time="2024-07-02T07:05:55.317536439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc9685798-vqxr2,Uid:ed2c8cf8-b127-46b4-9250-2cadb569e047,Namespace:calico-system,Attempt:1,} returns sandbox id \"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022\"" Jul 2 07:05:55.361225 containerd[1899]: time="2024-07-02T07:05:55.359717525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 07:05:55.845839 containerd[1899]: time="2024-07-02T07:05:55.844420111Z" level=info msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\"" Jul 2 07:05:55.848509 containerd[1899]: time="2024-07-02T07:05:55.847507563Z" level=info msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\"" Jul 2 07:05:55.938261 kubelet[3060]: I0702 07:05:55.937146 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-c58xl" podStartSLOduration=34.937028056 podCreationTimestamp="2024-07-02 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:55.212376723 +0000 UTC m=+45.577450304" watchObservedRunningTime="2024-07-02 07:05:55.937028056 +0000 UTC m=+46.302101638" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.939 [INFO][4729] k8s.go 608: Cleaning up netns ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.940 [INFO][4729] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" iface="eth0" netns="/var/run/netns/cni-7db00abf-9e18-4fe8-f455-05e5580c514a" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.940 [INFO][4729] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" iface="eth0" netns="/var/run/netns/cni-7db00abf-9e18-4fe8-f455-05e5580c514a" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.940 [INFO][4729] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" iface="eth0" netns="/var/run/netns/cni-7db00abf-9e18-4fe8-f455-05e5580c514a" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.941 [INFO][4729] k8s.go 615: Releasing IP address(es) ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:55.941 [INFO][4729] utils.go 188: Calico CNI releasing IP address ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.031 [INFO][4741] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.031 [INFO][4741] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.031 [INFO][4741] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.056 [WARNING][4741] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.056 [INFO][4741] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.069 [INFO][4741] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:56.075283 containerd[1899]: 2024-07-02 07:05:56.072 [INFO][4729] k8s.go 621: Teardown processing complete. ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:05:56.086041 systemd[1]: run-netns-cni\x2d7db00abf\x2d9e18\x2d4fe8\x2df455\x2d05e5580c514a.mount: Deactivated successfully. 
Jul 2 07:05:56.090386 containerd[1899]: time="2024-07-02T07:05:56.090332844Z" level=info msg="TearDown network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" successfully" Jul 2 07:05:56.091793 containerd[1899]: time="2024-07-02T07:05:56.090937781Z" level=info msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" returns successfully" Jul 2 07:05:56.093070 containerd[1899]: time="2024-07-02T07:05:56.093001764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxlxj,Uid:599f2da3-47a6-4b46-afa0-843612b872b6,Namespace:calico-system,Attempt:1,}" Jul 2 07:05:56.272749 kernel: kauditd_printk_skb: 30 callbacks suppressed Jul 2 07:05:56.272890 kernel: audit: type=1130 audit(1719903956.167:302): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.25.147:22-139.178.89.65:48894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:56.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.25.147:22-139.178.89.65:48894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.939 [INFO][4730] k8s.go 608: Cleaning up netns ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.940 [INFO][4730] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" iface="eth0" netns="/var/run/netns/cni-692f4fad-94e0-ecd5-e737-92bdec4d3ce9" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.940 [INFO][4730] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" iface="eth0" netns="/var/run/netns/cni-692f4fad-94e0-ecd5-e737-92bdec4d3ce9" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.941 [INFO][4730] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" iface="eth0" netns="/var/run/netns/cni-692f4fad-94e0-ecd5-e737-92bdec4d3ce9" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.941 [INFO][4730] k8s.go 615: Releasing IP address(es) ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:55.941 [INFO][4730] utils.go 188: Calico CNI releasing IP address ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.041 [INFO][4742] ipam_plugin.go 411: Releasing address using handleID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.042 [INFO][4742] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.069 [INFO][4742] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.112 [WARNING][4742] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.113 [INFO][4742] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.130 [INFO][4742] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:56.273495 containerd[1899]: 2024-07-02 07:05:56.163 [INFO][4730] k8s.go 621: Teardown processing complete. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:05:56.273495 containerd[1899]: time="2024-07-02T07:05:56.172260891Z" level=info msg="TearDown network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" successfully" Jul 2 07:05:56.273495 containerd[1899]: time="2024-07-02T07:05:56.172299416Z" level=info msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" returns successfully" Jul 2 07:05:56.273495 containerd[1899]: time="2024-07-02T07:05:56.236975583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6v7x9,Uid:a23e1fef-bc43-4e75-8a26-2e335205ef00,Namespace:kube-system,Attempt:1,}" Jul 2 07:05:56.167819 systemd[1]: Started sshd@7-172.31.25.147:22-139.178.89.65:48894.service - OpenSSH per-connection server daemon (139.178.89.65:48894). Jul 2 07:05:56.187316 systemd[1]: run-netns-cni\x2d692f4fad\x2d94e0\x2decd5\x2de737\x2d92bdec4d3ce9.mount: Deactivated successfully. 
Jul 2 07:05:56.233864 systemd-networkd[1583]: cali67af87431b3: Gained IPv6LL Jul 2 07:05:56.434000 kernel: audit: type=1325 audit(1719903956.425:303): table=filter:105 family=2 entries=11 op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:56.434117 kernel: audit: type=1300 audit(1719903956.425:303): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdd0e73800 a2=0 a3=7ffdd0e737ec items=0 ppid=3296 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.434151 kernel: audit: type=1327 audit(1719903956.425:303): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:56.425000 audit[4776]: NETFILTER_CFG table=filter:105 family=2 entries=11 op=nft_register_rule pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:56.425000 audit[4776]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdd0e73800 a2=0 a3=7ffdd0e737ec items=0 ppid=3296 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:56.427000 audit[4776]: NETFILTER_CFG table=nat:106 family=2 entries=35 op=nft_register_chain pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:56.427000 audit[4776]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdd0e73800 a2=0 a3=7ffdd0e737ec items=0 ppid=3296 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.441083 kernel: audit: type=1325 audit(1719903956.427:304): table=nat:106 family=2 entries=35 op=nft_register_chain pid=4776 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:56.441162 kernel: audit: type=1300 audit(1719903956.427:304): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdd0e73800 a2=0 a3=7ffdd0e737ec items=0 ppid=3296 pid=4776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.443800 kernel: audit: type=1327 audit(1719903956.427:304): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:56.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:56.619653 kernel: audit: type=1101 audit(1719903956.600:305): pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.600000 audit[4768]: USER_ACCT pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.627210 sshd[4768]: Accepted publickey for core from 139.178.89.65 port 48894 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:05:56.631953 kernel: audit: type=1103 audit(1719903956.627:306): pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.627000 audit[4768]: CRED_ACQ pid=4768 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.634820 kernel: audit: type=1006 audit(1719903956.631:307): pid=4768 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 2 07:05:56.631000 audit[4768]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd33ebae0 a2=3 a3=7f372b630480 items=0 ppid=1 pid=4768 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.631000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:05:56.641135 sshd[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:05:56.663727 systemd-logind[1877]: New session 8 of user core. Jul 2 07:05:56.669129 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 2 07:05:56.686000 audit[4768]: USER_START pid=4768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.688000 audit[4807]: CRED_ACQ pid=4807 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:56.802290 systemd-networkd[1583]: cali2e8e2bbf300: Link UP Jul 2 07:05:56.806627 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:05:56.806745 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2e8e2bbf300: link becomes ready Jul 2 07:05:56.806931 systemd-networkd[1583]: cali2e8e2bbf300: Gained carrier Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.476 [INFO][4759] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0 csi-node-driver- calico-system 599f2da3-47a6-4b46-afa0-843612b872b6 704 0 2024-07-02 07:05:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-25-147 csi-node-driver-fxlxj eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali2e8e2bbf300 [] []}} ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.477 [INFO][4759] k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.572 [INFO][4793] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" HandleID="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.587 [INFO][4793] ipam_plugin.go 264: Auto assigning IP ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" HandleID="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310260), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-25-147", "pod":"csi-node-driver-fxlxj", "timestamp":"2024-07-02 07:05:56.572425687 +0000 UTC"}, Hostname:"ip-172-31-25-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.587 [INFO][4793] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.588 [INFO][4793] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.588 [INFO][4793] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-147' Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.590 [INFO][4793] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.642 [INFO][4793] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.706 [INFO][4793] ipam.go 489: Trying affinity for 192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.730 [INFO][4793] ipam.go 155: Attempting to load block cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.733 [INFO][4793] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.733 [INFO][4793] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.0/26 handle="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.738 [INFO][4793] ipam.go 1685: Creating new handle: k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.745 [INFO][4793] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.0/26 handle="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.760 [INFO][4793] ipam.go 1216: Successfully claimed IPs: [192.168.14.3/26] block=192.168.14.0/26 
handle="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.760 [INFO][4793] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.3/26] handle="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" host="ip-172-31-25-147" Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.763 [INFO][4793] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:56.888943 containerd[1899]: 2024-07-02 07:05:56.763 [INFO][4793] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.3/26] IPv6=[] ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" HandleID="k8s-pod-network.56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.792 [INFO][4759] k8s.go 386: Populated endpoint ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"599f2da3-47a6-4b46-afa0-843612b872b6", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"", Pod:"csi-node-driver-fxlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2e8e2bbf300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.792 [INFO][4759] k8s.go 387: Calico CNI using IPs: [192.168.14.3/32] ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.792 [INFO][4759] dataplane_linux.go 68: Setting the host side veth name to cali2e8e2bbf300 ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.807 [INFO][4759] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.811 [INFO][4759] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" 
WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"599f2da3-47a6-4b46-afa0-843612b872b6", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a", Pod:"csi-node-driver-fxlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2e8e2bbf300", MAC:"1e:fd:12:03:9f:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:56.895772 containerd[1899]: 2024-07-02 07:05:56.843 [INFO][4759] k8s.go 500: Wrote updated endpoint to datastore ContainerID="56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a" Namespace="calico-system" Pod="csi-node-driver-fxlxj" WorkloadEndpoint="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:05:56.926000 audit[4820]: NETFILTER_CFG table=filter:107 family=2 
entries=38 op=nft_register_chain pid=4820 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:56.926000 audit[4820]: SYSCALL arch=c000003e syscall=46 success=yes exit=19828 a0=3 a1=7ffe45204d50 a2=0 a3=7ffe45204d3c items=0 ppid=4247 pid=4820 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:56.926000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:56.940019 systemd-networkd[1583]: calibf5211c82fe: Link UP Jul 2 07:05:56.941303 systemd-networkd[1583]: calibf5211c82fe: Gained carrier Jul 2 07:05:56.941675 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibf5211c82fe: link becomes ready Jul 2 07:05:56.944068 systemd-networkd[1583]: cali56cee1158e7: Gained IPv6LL Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.527 [INFO][4778] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0 coredns-5dd5756b68- kube-system a23e1fef-bc43-4e75-8a26-2e335205ef00 703 0 2024-07-02 07:05:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-25-147 coredns-5dd5756b68-6v7x9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf5211c82fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.528 [INFO][4778] k8s.go 77: Extracted identifiers 
for CmdAddK8s ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.699 [INFO][4799] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" HandleID="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.741 [INFO][4799] ipam_plugin.go 264: Auto assigning IP ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" HandleID="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003105d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-25-147", "pod":"coredns-5dd5756b68-6v7x9", "timestamp":"2024-07-02 07:05:56.699278252 +0000 UTC"}, Hostname:"ip-172-31-25-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.741 [INFO][4799] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.760 [INFO][4799] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.761 [INFO][4799] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-147' Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.765 [INFO][4799] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.774 [INFO][4799] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.820 [INFO][4799] ipam.go 489: Trying affinity for 192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.859 [INFO][4799] ipam.go 155: Attempting to load block cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.867 [INFO][4799] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.867 [INFO][4799] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.0/26 handle="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.873 [INFO][4799] ipam.go 1685: Creating new handle: k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.907 [INFO][4799] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.0/26 handle="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.917 [INFO][4799] ipam.go 1216: Successfully claimed IPs: [192.168.14.4/26] block=192.168.14.0/26 
handle="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.917 [INFO][4799] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.4/26] handle="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" host="ip-172-31-25-147" Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.917 [INFO][4799] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:05:56.989206 containerd[1899]: 2024-07-02 07:05:56.917 [INFO][4799] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.4/26] IPv6=[] ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" HandleID="k8s-pod-network.50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.990638 containerd[1899]: 2024-07-02 07:05:56.933 [INFO][4778] k8s.go 386: Populated endpoint ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a23e1fef-bc43-4e75-8a26-2e335205ef00", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"", Pod:"coredns-5dd5756b68-6v7x9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf5211c82fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:56.990638 containerd[1899]: 2024-07-02 07:05:56.933 [INFO][4778] k8s.go 387: Calico CNI using IPs: [192.168.14.4/32] ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.990638 containerd[1899]: 2024-07-02 07:05:56.933 [INFO][4778] dataplane_linux.go 68: Setting the host side veth name to calibf5211c82fe ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:56.990638 containerd[1899]: 2024-07-02 07:05:56.942 [INFO][4778] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 
07:05:56.990638 containerd[1899]: 2024-07-02 07:05:56.942 [INFO][4778] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a23e1fef-bc43-4e75-8a26-2e335205ef00", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f", Pod:"coredns-5dd5756b68-6v7x9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf5211c82fe", MAC:"fe:d5:7b:a5:26:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:05:56.991795 containerd[1899]: 2024-07-02 07:05:56.971 [INFO][4778] k8s.go 500: Wrote updated endpoint to datastore ContainerID="50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f" Namespace="kube-system" Pod="coredns-5dd5756b68-6v7x9" WorkloadEndpoint="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:05:57.160000 audit[4880]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4880 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:05:57.164917 containerd[1899]: time="2024-07-02T07:05:57.162599043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:57.164917 containerd[1899]: time="2024-07-02T07:05:57.162697448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:57.164917 containerd[1899]: time="2024-07-02T07:05:57.162727966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:57.164917 containerd[1899]: time="2024-07-02T07:05:57.162748474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:57.174873 containerd[1899]: time="2024-07-02T07:05:57.174787816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:57.174997 containerd[1899]: time="2024-07-02T07:05:57.174918264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:57.174997 containerd[1899]: time="2024-07-02T07:05:57.174960997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:57.175094 containerd[1899]: time="2024-07-02T07:05:57.174997899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:57.160000 audit[4880]: SYSCALL arch=c000003e syscall=46 success=yes exit=19408 a0=3 a1=7ffc610f4510 a2=0 a3=7ffc610f44fc items=0 ppid=4247 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:57.160000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:05:57.337106 sshd[4768]: pam_unix(sshd:session): session closed for user core Jul 2 07:05:57.342000 audit[4768]: USER_END pid=4768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:57.343000 audit[4768]: CRED_DISP pid=4768 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:05:57.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.25.147:22-139.178.89.65:48894 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:05:57.348664 systemd[1]: sshd@7-172.31.25.147:22-139.178.89.65:48894.service: Deactivated successfully. Jul 2 07:05:57.354288 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:05:57.356375 systemd-logind[1877]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:05:57.364794 systemd-logind[1877]: Removed session 8. Jul 2 07:05:57.373729 systemd[1]: run-containerd-runc-k8s.io-56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a-runc.oGwF2s.mount: Deactivated successfully. Jul 2 07:05:57.492117 containerd[1899]: time="2024-07-02T07:05:57.492072541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-6v7x9,Uid:a23e1fef-bc43-4e75-8a26-2e335205ef00,Namespace:kube-system,Attempt:1,} returns sandbox id \"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f\"" Jul 2 07:05:57.510951 containerd[1899]: time="2024-07-02T07:05:57.506599270Z" level=info msg="CreateContainer within sandbox \"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:05:57.538364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307386869.mount: Deactivated successfully. 
Jul 2 07:05:57.578620 containerd[1899]: time="2024-07-02T07:05:57.564728886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fxlxj,Uid:599f2da3-47a6-4b46-afa0-843612b872b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a\"" Jul 2 07:05:57.578620 containerd[1899]: time="2024-07-02T07:05:57.566764156Z" level=info msg="CreateContainer within sandbox \"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f67197872e8890dbcd5766384bf861f10255034735fc54fec2d4ef3bbd4e9094\"" Jul 2 07:05:57.578620 containerd[1899]: time="2024-07-02T07:05:57.567510276Z" level=info msg="StartContainer for \"f67197872e8890dbcd5766384bf861f10255034735fc54fec2d4ef3bbd4e9094\"" Jul 2 07:05:57.848078 containerd[1899]: time="2024-07-02T07:05:57.845600453Z" level=info msg="StartContainer for \"f67197872e8890dbcd5766384bf861f10255034735fc54fec2d4ef3bbd4e9094\" returns successfully" Jul 2 07:05:58.032721 systemd-networkd[1583]: cali2e8e2bbf300: Gained IPv6LL Jul 2 07:05:58.394088 kubelet[3060]: I0702 07:05:58.394058 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6v7x9" podStartSLOduration=37.394009149 podCreationTimestamp="2024-07-02 07:05:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:58.392758601 +0000 UTC m=+48.757832185" watchObservedRunningTime="2024-07-02 07:05:58.394009149 +0000 UTC m=+48.759082727" Jul 2 07:05:58.468000 audit[4976]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:58.468000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffd2fc2afd0 a2=0 a3=7ffd2fc2afbc items=0 ppid=3296 pid=4976 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:58.468000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:58.469000 audit[4976]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:58.469000 audit[4976]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd2fc2afd0 a2=0 a3=7ffd2fc2afbc items=0 ppid=3296 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:58.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:58.489000 audit[4978]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:58.489000 audit[4978]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffff92daee0 a2=0 a3=7ffff92daecc items=0 ppid=3296 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:58.489000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:58.507000 audit[4978]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4978 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:05:58.507000 audit[4978]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffff92daee0 
a2=0 a3=7ffff92daecc items=0 ppid=3296 pid=4978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:05:58.507000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:05:58.672940 systemd-networkd[1583]: calibf5211c82fe: Gained IPv6LL Jul 2 07:05:59.245199 containerd[1899]: time="2024-07-02T07:05:59.245147579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:59.247255 containerd[1899]: time="2024-07-02T07:05:59.247205191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 07:05:59.250944 containerd[1899]: time="2024-07-02T07:05:59.250792907Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:59.259841 containerd[1899]: time="2024-07-02T07:05:59.259801905Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:59.276574 containerd[1899]: time="2024-07-02T07:05:59.266343460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.906534644s" Jul 2 07:05:59.276860 containerd[1899]: time="2024-07-02T07:05:59.276815138Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 07:05:59.279411 containerd[1899]: time="2024-07-02T07:05:59.279378170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 07:05:59.280238 containerd[1899]: time="2024-07-02T07:05:59.280209622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:59.344839 containerd[1899]: time="2024-07-02T07:05:59.344780046Z" level=info msg="CreateContainer within sandbox \"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 07:05:59.402195 containerd[1899]: time="2024-07-02T07:05:59.402117608Z" level=info msg="CreateContainer within sandbox \"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf\"" Jul 2 07:05:59.403909 containerd[1899]: time="2024-07-02T07:05:59.403520872Z" level=info msg="StartContainer for \"eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf\"" Jul 2 07:05:59.480042 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.bfpxfw.mount: Deactivated successfully. Jul 2 07:05:59.582612 containerd[1899]: time="2024-07-02T07:05:59.582488576Z" level=info msg="StartContainer for \"eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf\" returns successfully" Jul 2 07:06:00.570268 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.oDgsTE.mount: Deactivated successfully. 
Jul 2 07:06:00.777761 kubelet[3060]: I0702 07:06:00.777717 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fc9685798-vqxr2" podStartSLOduration=28.85962248 podCreationTimestamp="2024-07-02 07:05:28 +0000 UTC" firstStartedPulling="2024-07-02 07:05:55.359367149 +0000 UTC m=+45.724440710" lastFinishedPulling="2024-07-02 07:05:59.277408432 +0000 UTC m=+49.642482003" observedRunningTime="2024-07-02 07:06:00.448822036 +0000 UTC m=+50.813895619" watchObservedRunningTime="2024-07-02 07:06:00.777663773 +0000 UTC m=+51.142737354" Jul 2 07:06:01.057860 containerd[1899]: time="2024-07-02T07:06:01.057808910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:01.064403 containerd[1899]: time="2024-07-02T07:06:01.064333442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 07:06:01.071534 containerd[1899]: time="2024-07-02T07:06:01.071457669Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:01.080014 containerd[1899]: time="2024-07-02T07:06:01.079970312Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:01.083110 containerd[1899]: time="2024-07-02T07:06:01.083065237Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:01.084881 containerd[1899]: time="2024-07-02T07:06:01.084838032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.804909219s" Jul 2 07:06:01.085018 containerd[1899]: time="2024-07-02T07:06:01.084887901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 07:06:01.087497 containerd[1899]: time="2024-07-02T07:06:01.087450358Z" level=info msg="CreateContainer within sandbox \"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 07:06:01.157262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3434216734.mount: Deactivated successfully. Jul 2 07:06:01.160835 containerd[1899]: time="2024-07-02T07:06:01.160654511Z" level=info msg="CreateContainer within sandbox \"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31af0ed2c1a72e09a4669a81b022cdd287d60fc9081a80c3e58aedd1cff4b4ec\"" Jul 2 07:06:01.161496 containerd[1899]: time="2024-07-02T07:06:01.161460166Z" level=info msg="StartContainer for \"31af0ed2c1a72e09a4669a81b022cdd287d60fc9081a80c3e58aedd1cff4b4ec\"" Jul 2 07:06:01.292170 containerd[1899]: time="2024-07-02T07:06:01.292099463Z" level=info msg="StartContainer for \"31af0ed2c1a72e09a4669a81b022cdd287d60fc9081a80c3e58aedd1cff4b4ec\" returns successfully" Jul 2 07:06:01.294681 containerd[1899]: time="2024-07-02T07:06:01.293743301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 07:06:02.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.25.147:22-139.178.89.65:35650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:02.397289 systemd[1]: Started sshd@8-172.31.25.147:22-139.178.89.65:35650.service - OpenSSH per-connection server daemon (139.178.89.65:35650). Jul 2 07:06:02.399286 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 2 07:06:02.399612 kernel: audit: type=1130 audit(1719903962.397:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.25.147:22-139.178.89.65:35650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:02.919000 audit[5090]: USER_ACCT pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:02.923812 kernel: audit: type=1101 audit(1719903962.919:320): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:02.929666 kernel: audit: type=1103 audit(1719903962.925:321): pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:02.925000 audit[5090]: CRED_ACQ pid=5090 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:02.929909 sshd[5090]: Accepted publickey for core from 139.178.89.65 port 35650 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:02.935507 kernel: audit: type=1006 audit(1719903962.929:322): 
pid=5090 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 07:06:02.935657 kernel: audit: type=1300 audit(1719903962.929:322): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5000c230 a2=3 a3=7f21d5838480 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:02.929000 audit[5090]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5000c230 a2=3 a3=7f21d5838480 items=0 ppid=1 pid=5090 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:02.929000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:02.937644 kernel: audit: type=1327 audit(1719903962.929:322): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:02.942180 sshd[5090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:03.002110 systemd-logind[1877]: New session 9 of user core. Jul 2 07:06:03.005002 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 2 07:06:03.023000 audit[5090]: USER_START pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.031399 kernel: audit: type=1105 audit(1719903963.023:323): pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.031576 kernel: audit: type=1103 audit(1719903963.027:324): pid=5093 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.027000 audit[5093]: CRED_ACQ pid=5093 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.877532 sshd[5090]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:03.891950 kernel: audit: type=1106 audit(1719903963.881:325): pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.892069 kernel: audit: type=1104 audit(1719903963.884:326): pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 
addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.881000 audit[5090]: USER_END pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.884000 audit[5090]: CRED_DISP pid=5090 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:03.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.25.147:22-139.178.89.65:35650 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:03.889984 systemd[1]: sshd@8-172.31.25.147:22-139.178.89.65:35650.service: Deactivated successfully. Jul 2 07:06:03.891328 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:06:03.895209 systemd-logind[1877]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:06:03.895850 containerd[1899]: time="2024-07-02T07:06:03.895334756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:03.896823 systemd-logind[1877]: Removed session 9. 
Jul 2 07:06:03.899015 containerd[1899]: time="2024-07-02T07:06:03.898927608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 07:06:03.901446 containerd[1899]: time="2024-07-02T07:06:03.901403440Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:03.908485 containerd[1899]: time="2024-07-02T07:06:03.908302580Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:03.913618 containerd[1899]: time="2024-07-02T07:06:03.913547439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:03.916973 containerd[1899]: time="2024-07-02T07:06:03.916906129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.623117669s" Jul 2 07:06:03.917160 containerd[1899]: time="2024-07-02T07:06:03.917138106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 07:06:03.920276 containerd[1899]: time="2024-07-02T07:06:03.920213867Z" level=info msg="CreateContainer within sandbox \"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 07:06:03.966595 containerd[1899]: time="2024-07-02T07:06:03.966518890Z" level=info msg="CreateContainer within sandbox \"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9542484b480225a62cb575a199c98057f653b7e24109e432ff9a4632c4b00653\"" Jul 2 07:06:03.967372 containerd[1899]: time="2024-07-02T07:06:03.967335473Z" level=info msg="StartContainer for \"9542484b480225a62cb575a199c98057f653b7e24109e432ff9a4632c4b00653\"" Jul 2 07:06:04.088043 systemd[1]: run-containerd-runc-k8s.io-9542484b480225a62cb575a199c98057f653b7e24109e432ff9a4632c4b00653-runc.m5TPiu.mount: Deactivated successfully. Jul 2 07:06:04.159506 containerd[1899]: time="2024-07-02T07:06:04.159392576Z" level=info msg="StartContainer for \"9542484b480225a62cb575a199c98057f653b7e24109e432ff9a4632c4b00653\" returns successfully" Jul 2 07:06:04.532748 kubelet[3060]: I0702 07:06:04.532700 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fxlxj" podStartSLOduration=30.181250339 podCreationTimestamp="2024-07-02 07:05:28 +0000 UTC" firstStartedPulling="2024-07-02 07:05:57.566202629 +0000 UTC m=+47.931276223" lastFinishedPulling="2024-07-02 07:06:03.91758055 +0000 UTC m=+54.282654123" observedRunningTime="2024-07-02 07:06:04.532262265 +0000 UTC m=+54.897335872" watchObservedRunningTime="2024-07-02 07:06:04.532628239 +0000 UTC m=+54.897701821" Jul 2 07:06:05.112707 kubelet[3060]: I0702 07:06:05.112668 3060 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 07:06:05.113148 kubelet[3060]: I0702 07:06:05.112735 3060 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 
07:06:08.902077 systemd[1]: Started sshd@9-172.31.25.147:22-139.178.89.65:48394.service - OpenSSH per-connection server daemon (139.178.89.65:48394). Jul 2 07:06:08.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.25.147:22-139.178.89.65:48394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:08.907377 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:06:08.907793 kernel: audit: type=1130 audit(1719903968.901:328): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.25.147:22-139.178.89.65:48394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:09.083416 kernel: audit: type=1101 audit(1719903969.073:329): pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.083533 kernel: audit: type=1103 audit(1719903969.075:330): pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.087583 kernel: audit: type=1006 audit(1719903969.075:331): pid=5172 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 07:06:09.087651 kernel: audit: type=1300 audit(1719903969.075:331): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea2dcda20 a2=3 a3=7fdb0734a480 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:09.087684 
kernel: audit: type=1327 audit(1719903969.075:331): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:09.073000 audit[5172]: USER_ACCT pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.075000 audit[5172]: CRED_ACQ pid=5172 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.075000 audit[5172]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea2dcda20 a2=3 a3=7fdb0734a480 items=0 ppid=1 pid=5172 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:09.075000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:09.087953 sshd[5172]: Accepted publickey for core from 139.178.89.65 port 48394 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:09.077282 sshd[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:09.084678 systemd-logind[1877]: New session 10 of user core. Jul 2 07:06:09.089920 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jul 2 07:06:09.099000 audit[5172]: USER_START pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.104698 kernel: audit: type=1105 audit(1719903969.099:332): pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.103000 audit[5175]: CRED_ACQ pid=5175 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.108705 kernel: audit: type=1103 audit(1719903969.103:333): pid=5175 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.319519 sshd[5172]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:09.321000 audit[5172]: USER_END pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.325663 kernel: audit: type=1106 audit(1719903969.321:334): pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.324499 systemd-logind[1877]: Session 10 logged out. Waiting for processes to exit. Jul 2 07:06:09.321000 audit[5172]: CRED_DISP pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.328789 kernel: audit: type=1104 audit(1719903969.321:335): pid=5172 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:09.326982 systemd[1]: sshd@9-172.31.25.147:22-139.178.89.65:48394.service: Deactivated successfully. Jul 2 07:06:09.328209 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:06:09.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.25.147:22-139.178.89.65:48394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:09.330365 systemd-logind[1877]: Removed session 10. Jul 2 07:06:09.900421 containerd[1899]: time="2024-07-02T07:06:09.900331404Z" level=info msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\"" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:09.955 [WARNING][5201] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"599f2da3-47a6-4b46-afa0-843612b872b6", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a", Pod:"csi-node-driver-fxlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2e8e2bbf300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:09.956 [INFO][5201] k8s.go 608: Cleaning up netns ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:09.956 [INFO][5201] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" iface="eth0" netns="" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:09.956 [INFO][5201] k8s.go 615: Releasing IP address(es) ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:09.956 [INFO][5201] utils.go 188: Calico CNI releasing IP address ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.017 [INFO][5207] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.018 [INFO][5207] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.018 [INFO][5207] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.033 [WARNING][5207] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.034 [INFO][5207] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.043 [INFO][5207] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.056157 containerd[1899]: 2024-07-02 07:06:10.048 [INFO][5201] k8s.go 621: Teardown processing complete. ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.062630 containerd[1899]: time="2024-07-02T07:06:10.062539956Z" level=info msg="TearDown network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" successfully" Jul 2 07:06:10.062773 containerd[1899]: time="2024-07-02T07:06:10.062747425Z" level=info msg="StopPodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" returns successfully" Jul 2 07:06:10.068201 containerd[1899]: time="2024-07-02T07:06:10.068155568Z" level=info msg="RemovePodSandbox for \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\"" Jul 2 07:06:10.123989 containerd[1899]: time="2024-07-02T07:06:10.068205223Z" level=info msg="Forcibly stopping sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\"" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.162 [WARNING][5226] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"599f2da3-47a6-4b46-afa0-843612b872b6", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"56ff1ecbfaf8cf8296dbb49e7ef929969718543e0beafc9d0cc94f9e8595288a", Pod:"csi-node-driver-fxlxj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali2e8e2bbf300", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.163 [INFO][5226] k8s.go 608: Cleaning up netns ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.163 [INFO][5226] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" iface="eth0" netns="" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.163 [INFO][5226] k8s.go 615: Releasing IP address(es) ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.163 [INFO][5226] utils.go 188: Calico CNI releasing IP address ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.197 [INFO][5232] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.197 [INFO][5232] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.197 [INFO][5232] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.203 [WARNING][5232] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.203 [INFO][5232] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" HandleID="k8s-pod-network.a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Workload="ip--172--31--25--147-k8s-csi--node--driver--fxlxj-eth0" Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.205 [INFO][5232] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.208414 containerd[1899]: 2024-07-02 07:06:10.206 [INFO][5226] k8s.go 621: Teardown processing complete. ContainerID="a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315" Jul 2 07:06:10.209088 containerd[1899]: time="2024-07-02T07:06:10.208655022Z" level=info msg="TearDown network for sandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" successfully" Jul 2 07:06:10.232607 containerd[1899]: time="2024-07-02T07:06:10.232535259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 07:06:10.237851 containerd[1899]: time="2024-07-02T07:06:10.237793349Z" level=info msg="RemovePodSandbox \"a6c52791aa88cd0058db344d9cd22b40bbdc8c75f2903f77545e072fe6520315\" returns successfully" Jul 2 07:06:10.238534 containerd[1899]: time="2024-07-02T07:06:10.238500292Z" level=info msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\"" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.288 [WARNING][5250] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c8048bda-b540-4626-b35e-23b1e8b8fc9d", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9", Pod:"coredns-5dd5756b68-c58xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67af87431b3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.288 [INFO][5250] k8s.go 608: Cleaning up netns ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.288 [INFO][5250] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" iface="eth0" netns="" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.288 [INFO][5250] k8s.go 615: Releasing IP address(es) ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.288 [INFO][5250] utils.go 188: Calico CNI releasing IP address ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.312 [INFO][5256] ipam_plugin.go 411: Releasing address using handleID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.313 [INFO][5256] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.313 [INFO][5256] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.322 [WARNING][5256] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.322 [INFO][5256] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.323 [INFO][5256] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.327122 containerd[1899]: 2024-07-02 07:06:10.325 [INFO][5250] k8s.go 621: Teardown processing complete. 
ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.328508 containerd[1899]: time="2024-07-02T07:06:10.327165906Z" level=info msg="TearDown network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" successfully" Jul 2 07:06:10.328508 containerd[1899]: time="2024-07-02T07:06:10.327203039Z" level=info msg="StopPodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" returns successfully" Jul 2 07:06:10.328508 containerd[1899]: time="2024-07-02T07:06:10.327716729Z" level=info msg="RemovePodSandbox for \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\"" Jul 2 07:06:10.328508 containerd[1899]: time="2024-07-02T07:06:10.327754217Z" level=info msg="Forcibly stopping sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\"" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.376 [WARNING][5277] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"c8048bda-b540-4626-b35e-23b1e8b8fc9d", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"6a99d6c6714b790fccbb47c9953531eb138ec6b81fa7ba23fdcae3e4b4f337f9", Pod:"coredns-5dd5756b68-c58xl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67af87431b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.377 [INFO][5277] k8s.go 608: Cleaning up netns 
ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.377 [INFO][5277] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" iface="eth0" netns="" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.377 [INFO][5277] k8s.go 615: Releasing IP address(es) ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.377 [INFO][5277] utils.go 188: Calico CNI releasing IP address ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.411 [INFO][5283] ipam_plugin.go 411: Releasing address using handleID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.411 [INFO][5283] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.411 [INFO][5283] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.420 [WARNING][5283] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.420 [INFO][5283] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" HandleID="k8s-pod-network.b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--c58xl-eth0" Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.423 [INFO][5283] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.426540 containerd[1899]: 2024-07-02 07:06:10.425 [INFO][5277] k8s.go 621: Teardown processing complete. ContainerID="b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814" Jul 2 07:06:10.427419 containerd[1899]: time="2024-07-02T07:06:10.426645882Z" level=info msg="TearDown network for sandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" successfully" Jul 2 07:06:10.447671 containerd[1899]: time="2024-07-02T07:06:10.447626507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 07:06:10.447845 containerd[1899]: time="2024-07-02T07:06:10.447698858Z" level=info msg="RemovePodSandbox \"b953f40dfc33a0f888350fae847a6750b12cd50501ad54247bfddb5a8696b814\" returns successfully" Jul 2 07:06:10.448328 containerd[1899]: time="2024-07-02T07:06:10.448294536Z" level=info msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\"" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.490 [WARNING][5302] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a23e1fef-bc43-4e75-8a26-2e335205ef00", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f", Pod:"coredns-5dd5756b68-6v7x9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf5211c82fe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.490 [INFO][5302] k8s.go 608: Cleaning up netns ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.490 [INFO][5302] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" iface="eth0" netns="" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.490 [INFO][5302] k8s.go 615: Releasing IP address(es) ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.490 [INFO][5302] utils.go 188: Calico CNI releasing IP address ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.515 [INFO][5310] ipam_plugin.go 411: Releasing address using handleID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.516 [INFO][5310] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.516 [INFO][5310] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.526 [WARNING][5310] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.526 [INFO][5310] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.528 [INFO][5310] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.532821 containerd[1899]: 2024-07-02 07:06:10.529 [INFO][5302] k8s.go 621: Teardown processing complete. 
ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.532821 containerd[1899]: time="2024-07-02T07:06:10.531153803Z" level=info msg="TearDown network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" successfully" Jul 2 07:06:10.532821 containerd[1899]: time="2024-07-02T07:06:10.531193355Z" level=info msg="StopPodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" returns successfully" Jul 2 07:06:10.533912 containerd[1899]: time="2024-07-02T07:06:10.533871258Z" level=info msg="RemovePodSandbox for \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\"" Jul 2 07:06:10.534034 containerd[1899]: time="2024-07-02T07:06:10.533919134Z" level=info msg="Forcibly stopping sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\"" Jul 2 07:06:10.611736 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.glfD1q.mount: Deactivated successfully. Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.583 [WARNING][5328] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a23e1fef-bc43-4e75-8a26-2e335205ef00", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"50481d329efc3c10edf7d11db33493ac37ac8a0620bd838edd45608b1aac084f", Pod:"coredns-5dd5756b68-6v7x9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.14.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf5211c82fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.584 [INFO][5328] k8s.go 608: Cleaning up netns 
ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.584 [INFO][5328] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" iface="eth0" netns="" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.584 [INFO][5328] k8s.go 615: Releasing IP address(es) ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.584 [INFO][5328] utils.go 188: Calico CNI releasing IP address ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.691 [INFO][5335] ipam_plugin.go 411: Releasing address using handleID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.691 [INFO][5335] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.691 [INFO][5335] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.702 [WARNING][5335] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.702 [INFO][5335] ipam_plugin.go 439: Releasing address using workloadID ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" HandleID="k8s-pod-network.f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Workload="ip--172--31--25--147-k8s-coredns--5dd5756b68--6v7x9-eth0" Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.706 [INFO][5335] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.724304 containerd[1899]: 2024-07-02 07:06:10.709 [INFO][5328] k8s.go 621: Teardown processing complete. ContainerID="f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831" Jul 2 07:06:10.724304 containerd[1899]: time="2024-07-02T07:06:10.714923025Z" level=info msg="TearDown network for sandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" successfully" Jul 2 07:06:10.733167 containerd[1899]: time="2024-07-02T07:06:10.731426364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 2 07:06:10.733167 containerd[1899]: time="2024-07-02T07:06:10.731522111Z" level=info msg="RemovePodSandbox \"f2eb820533dce4c8cbe6e822d38a370c34c522da64230c39e42e3cd5490f7831\" returns successfully" Jul 2 07:06:10.734031 containerd[1899]: time="2024-07-02T07:06:10.733998375Z" level=info msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\"" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.831 [WARNING][5371] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0", GenerateName:"calico-kube-controllers-5fc9685798-", Namespace:"calico-system", SelfLink:"", UID:"ed2c8cf8-b127-46b4-9250-2cadb569e047", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc9685798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022", Pod:"calico-kube-controllers-5fc9685798-vqxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56cee1158e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.832 [INFO][5371] k8s.go 608: Cleaning up netns ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.832 [INFO][5371] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" iface="eth0" netns="" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.832 [INFO][5371] k8s.go 615: Releasing IP address(es) ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.832 [INFO][5371] utils.go 188: Calico CNI releasing IP address ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.872 [INFO][5377] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.872 [INFO][5377] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.872 [INFO][5377] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.878 [WARNING][5377] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.878 [INFO][5377] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.880 [INFO][5377] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.884726 containerd[1899]: 2024-07-02 07:06:10.881 [INFO][5371] k8s.go 621: Teardown processing complete. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.884726 containerd[1899]: time="2024-07-02T07:06:10.883467153Z" level=info msg="TearDown network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" successfully" Jul 2 07:06:10.884726 containerd[1899]: time="2024-07-02T07:06:10.883507735Z" level=info msg="StopPodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" returns successfully" Jul 2 07:06:10.886017 containerd[1899]: time="2024-07-02T07:06:10.884737366Z" level=info msg="RemovePodSandbox for \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\"" Jul 2 07:06:10.886017 containerd[1899]: time="2024-07-02T07:06:10.884785124Z" level=info msg="Forcibly stopping sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\"" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.929 [WARNING][5400] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0", GenerateName:"calico-kube-controllers-5fc9685798-", Namespace:"calico-system", SelfLink:"", UID:"ed2c8cf8-b127-46b4-9250-2cadb569e047", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc9685798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"04278f875c5804ee37072ca7231bb6034f04d87ecf89bdc12dcb7efda16e9022", Pod:"calico-kube-controllers-5fc9685798-vqxr2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.14.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali56cee1158e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.930 [INFO][5400] k8s.go 608: Cleaning up netns ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.930 [INFO][5400] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" iface="eth0" netns="" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.930 [INFO][5400] k8s.go 615: Releasing IP address(es) ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.930 [INFO][5400] utils.go 188: Calico CNI releasing IP address ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.956 [INFO][5406] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.956 [INFO][5406] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.956 [INFO][5406] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.963 [WARNING][5406] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.963 [INFO][5406] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" HandleID="k8s-pod-network.8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Workload="ip--172--31--25--147-k8s-calico--kube--controllers--5fc9685798--vqxr2-eth0" Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.964 [INFO][5406] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:10.968282 containerd[1899]: 2024-07-02 07:06:10.966 [INFO][5400] k8s.go 621: Teardown processing complete. ContainerID="8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b" Jul 2 07:06:10.969511 containerd[1899]: time="2024-07-02T07:06:10.968336895Z" level=info msg="TearDown network for sandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" successfully" Jul 2 07:06:10.972617 containerd[1899]: time="2024-07-02T07:06:10.972563138Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 07:06:10.972816 containerd[1899]: time="2024-07-02T07:06:10.972655231Z" level=info msg="RemovePodSandbox \"8e13a0fb5bc2e67a81a71b92e56a95230013790b2d3a40132a926b91808d3e8b\" returns successfully" Jul 2 07:06:14.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.25.147:22-139.178.89.65:48406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:14.350098 systemd[1]: Started sshd@10-172.31.25.147:22-139.178.89.65:48406.service - OpenSSH per-connection server daemon (139.178.89.65:48406). Jul 2 07:06:14.351479 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:06:14.351537 kernel: audit: type=1130 audit(1719903974.349:337): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.25.147:22-139.178.89.65:48406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:14.549000 audit[5419]: USER_ACCT pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.549000 audit[5419]: CRED_ACQ pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.554639 sshd[5419]: Accepted publickey for core from 139.178.89.65 port 48406 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:14.555222 kernel: audit: type=1101 audit(1719903974.549:338): pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.555300 kernel: audit: type=1103 audit(1719903974.549:339): pid=5419 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.555331 kernel: audit: type=1006 audit(1719903974.549:340): 
pid=5419 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 2 07:06:14.549000 audit[5419]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8d542b60 a2=3 a3=7fa183a0e480 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:14.557093 sshd[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:14.559658 kernel: audit: type=1300 audit(1719903974.549:340): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc8d542b60 a2=3 a3=7fa183a0e480 items=0 ppid=1 pid=5419 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:14.549000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:14.561698 kernel: audit: type=1327 audit(1719903974.549:340): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:14.566543 systemd-logind[1877]: New session 11 of user core. Jul 2 07:06:14.569888 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 07:06:14.578000 audit[5419]: USER_START pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.582586 kernel: audit: type=1105 audit(1719903974.578:341): pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.582000 audit[5422]: CRED_ACQ pid=5422 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.585623 kernel: audit: type=1103 audit(1719903974.582:342): pid=5422 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.896931 sshd[5419]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:14.904866 kernel: audit: type=1106 audit(1719903974.898:343): pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.898000 audit[5419]: USER_END pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.903165 systemd[1]: sshd@10-172.31.25.147:22-139.178.89.65:48406.service: Deactivated successfully. Jul 2 07:06:14.904546 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:06:14.898000 audit[5419]: CRED_DISP pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.912470 kernel: audit: type=1104 audit(1719903974.898:344): pid=5419 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:14.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.25.147:22-139.178.89.65:48406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:14.910633 systemd-logind[1877]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:06:14.913011 systemd-logind[1877]: Removed session 11. Jul 2 07:06:14.926270 systemd[1]: Started sshd@11-172.31.25.147:22-139.178.89.65:48418.service - OpenSSH per-connection server daemon (139.178.89.65:48418). Jul 2 07:06:14.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.25.147:22-139.178.89.65:48418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:15.086000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.087127 sshd[5433]: Accepted publickey for core from 139.178.89.65 port 48418 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:15.087000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.088000 audit[5433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd45059750 a2=3 a3=7f8839667480 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:15.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:15.089046 sshd[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:15.096219 systemd-logind[1877]: New session 12 of user core. Jul 2 07:06:15.102886 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 2 07:06:15.112000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.114000 audit[5436]: CRED_ACQ pid=5436 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.631707 sshd[5433]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:15.633000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.633000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.636675 systemd[1]: sshd@11-172.31.25.147:22-139.178.89.65:48418.service: Deactivated successfully. Jul 2 07:06:15.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.25.147:22-139.178.89.65:48418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:15.639279 systemd-logind[1877]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:06:15.639368 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:06:15.641982 systemd-logind[1877]: Removed session 12. 
Jul 2 07:06:15.677240 systemd[1]: Started sshd@12-172.31.25.147:22-139.178.89.65:48420.service - OpenSSH per-connection server daemon (139.178.89.65:48420). Jul 2 07:06:15.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.25.147:22-139.178.89.65:48420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:15.848000 audit[5444]: USER_ACCT pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.849372 sshd[5444]: Accepted publickey for core from 139.178.89.65 port 48420 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:15.850000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.850000 audit[5444]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd4cd49f0 a2=3 a3=7f2886c49480 items=0 ppid=1 pid=5444 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:15.850000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:15.851595 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:15.870712 systemd-logind[1877]: New session 13 of user core. Jul 2 07:06:15.873050 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 2 07:06:15.884000 audit[5444]: USER_START pid=5444 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:15.887000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:16.104631 sshd[5444]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:16.105000 audit[5444]: USER_END pid=5444 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:16.105000 audit[5444]: CRED_DISP pid=5444 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:16.108473 systemd[1]: sshd@12-172.31.25.147:22-139.178.89.65:48420.service: Deactivated successfully. Jul 2 07:06:16.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.25.147:22-139.178.89.65:48420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:16.110958 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:06:16.111338 systemd-logind[1877]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:06:16.114252 systemd-logind[1877]: Removed session 13. 
Jul 2 07:06:21.137598 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 2 07:06:21.137752 kernel: audit: type=1130 audit(1719903981.134:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.25.147:22-139.178.89.65:54150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:21.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.25.147:22-139.178.89.65:54150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:21.135110 systemd[1]: Started sshd@13-172.31.25.147:22-139.178.89.65:54150.service - OpenSSH per-connection server daemon (139.178.89.65:54150). Jul 2 07:06:21.306463 kernel: audit: type=1101 audit(1719903981.298:365): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.306603 kernel: audit: type=1103 audit(1719903981.300:366): pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.306649 kernel: audit: type=1006 audit(1719903981.300:367): pid=5463 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 2 07:06:21.298000 audit[5463]: USER_ACCT pid=5463 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.300000 audit[5463]: CRED_ACQ pid=5463 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.300000 audit[5463]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc23a78610 a2=3 a3=7f481bb69480 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:21.318068 kernel: audit: type=1300 audit(1719903981.300:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc23a78610 a2=3 a3=7f481bb69480 items=0 ppid=1 pid=5463 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:21.301611 sshd[5463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:21.320875 kernel: audit: type=1327 audit(1719903981.300:367): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:21.300000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:21.320973 sshd[5463]: Accepted publickey for core from 139.178.89.65 port 54150 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:21.318096 systemd-logind[1877]: New session 14 of user core. Jul 2 07:06:21.320937 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jul 2 07:06:21.329000 audit[5463]: USER_START pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.331000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.335468 kernel: audit: type=1105 audit(1719903981.329:368): pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.335540 kernel: audit: type=1103 audit(1719903981.331:369): pid=5466 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.520468 sshd[5463]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:21.523000 audit[5463]: USER_END pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.524000 audit[5463]: CRED_DISP pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.526683 
systemd[1]: sshd@13-172.31.25.147:22-139.178.89.65:54150.service: Deactivated successfully. Jul 2 07:06:21.528315 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:06:21.530017 kernel: audit: type=1106 audit(1719903981.523:370): pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.530105 kernel: audit: type=1104 audit(1719903981.524:371): pid=5463 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:21.531004 systemd-logind[1877]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:06:21.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.25.147:22-139.178.89.65:54150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:21.532520 systemd-logind[1877]: Removed session 14. Jul 2 07:06:25.830147 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.L2ExUV.mount: Deactivated successfully. Jul 2 07:06:26.550919 systemd[1]: Started sshd@14-172.31.25.147:22-139.178.89.65:54166.service - OpenSSH per-connection server daemon (139.178.89.65:54166). Jul 2 07:06:26.555420 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:06:26.555583 kernel: audit: type=1130 audit(1719903986.551:373): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.25.147:22-139.178.89.65:54166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:26.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.25.147:22-139.178.89.65:54166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:26.721672 kernel: audit: type=1101 audit(1719903986.711:374): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.721841 kernel: audit: type=1103 audit(1719903986.712:375): pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.721877 kernel: audit: type=1006 audit(1719903986.712:376): pid=5501 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 2 07:06:26.721906 kernel: audit: type=1300 audit(1719903986.712:376): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc39b0d9c0 a2=3 a3=7f95c60c9480 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:26.711000 audit[5501]: USER_ACCT pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.712000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.712000 audit[5501]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc39b0d9c0 a2=3 a3=7f95c60c9480 items=0 ppid=1 pid=5501 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:26.713788 sshd[5501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:26.723999 kernel: audit: type=1327 audit(1719903986.712:376): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:26.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:26.724086 sshd[5501]: Accepted publickey for core from 139.178.89.65 port 54166 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:26.728936 systemd-logind[1877]: New session 15 of user core. Jul 2 07:06:26.738882 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 2 07:06:26.752000 audit[5501]: USER_START pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.757704 kernel: audit: type=1105 audit(1719903986.752:377): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.757000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.762582 kernel: audit: type=1103 audit(1719903986.757:378): pid=5504 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.981408 sshd[5501]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:26.982000 audit[5501]: USER_END pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.984000 audit[5501]: CRED_DISP pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.990320 
kernel: audit: type=1106 audit(1719903986.982:379): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.990494 kernel: audit: type=1104 audit(1719903986.984:380): pid=5501 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:26.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.25.147:22-139.178.89.65:54166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:26.988034 systemd[1]: sshd@14-172.31.25.147:22-139.178.89.65:54166.service: Deactivated successfully. Jul 2 07:06:26.989580 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:06:26.991465 systemd-logind[1877]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:06:26.995595 systemd-logind[1877]: Removed session 15. Jul 2 07:06:32.030244 systemd[1]: Started sshd@15-172.31.25.147:22-139.178.89.65:54198.service - OpenSSH per-connection server daemon (139.178.89.65:54198). Jul 2 07:06:32.036161 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:06:32.036252 kernel: audit: type=1130 audit(1719903992.028:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.25.147:22-139.178.89.65:54198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:32.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.25.147:22-139.178.89.65:54198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:32.218075 kernel: audit: type=1101 audit(1719903992.207:383): pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.218224 kernel: audit: type=1103 audit(1719903992.208:384): pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.207000 audit[5521]: USER_ACCT pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.208000 audit[5521]: CRED_ACQ pid=5521 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.218437 sshd[5521]: Accepted publickey for core from 139.178.89.65 port 54198 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:32.211496 sshd[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:32.227600 kernel: audit: type=1006 audit(1719903992.210:385): pid=5521 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 07:06:32.227726 
kernel: audit: type=1300 audit(1719903992.210:385): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5c21060 a2=3 a3=7fea866dc480 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:32.210000 audit[5521]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc5c21060 a2=3 a3=7fea866dc480 items=0 ppid=1 pid=5521 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:32.229342 kernel: audit: type=1327 audit(1719903992.210:385): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:32.210000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:32.234708 systemd-logind[1877]: New session 16 of user core. Jul 2 07:06:32.236885 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 07:06:32.251000 audit[5521]: USER_START pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.257745 kernel: audit: type=1105 audit(1719903992.251:386): pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.257852 kernel: audit: type=1103 audit(1719903992.256:387): pid=5529 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh 
res=success' Jul 2 07:06:32.256000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.541277 sshd[5521]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:32.541000 audit[5521]: USER_END pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.546696 kernel: audit: type=1106 audit(1719903992.541:388): pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.545000 audit[5521]: CRED_DISP pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.548723 systemd[1]: sshd@15-172.31.25.147:22-139.178.89.65:54198.service: Deactivated successfully. Jul 2 07:06:32.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.25.147:22-139.178.89.65:54198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:32.549590 kernel: audit: type=1104 audit(1719903992.545:389): pid=5521 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:32.549901 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:06:32.551859 systemd-logind[1877]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:06:32.553036 systemd-logind[1877]: Removed session 16. Jul 2 07:06:33.086761 kubelet[3060]: I0702 07:06:33.086717 3060 topology_manager.go:215] "Topology Admit Handler" podUID="1775b9ec-4d50-4063-8645-5d7e1c0372ad" podNamespace="calico-apiserver" podName="calico-apiserver-f86c66868-62ktb" Jul 2 07:06:33.187562 kubelet[3060]: I0702 07:06:33.187515 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1775b9ec-4d50-4063-8645-5d7e1c0372ad-calico-apiserver-certs\") pod \"calico-apiserver-f86c66868-62ktb\" (UID: \"1775b9ec-4d50-4063-8645-5d7e1c0372ad\") " pod="calico-apiserver/calico-apiserver-f86c66868-62ktb" Jul 2 07:06:33.187831 kubelet[3060]: I0702 07:06:33.187815 3060 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69snx\" (UniqueName: \"kubernetes.io/projected/1775b9ec-4d50-4063-8645-5d7e1c0372ad-kube-api-access-69snx\") pod \"calico-apiserver-f86c66868-62ktb\" (UID: \"1775b9ec-4d50-4063-8645-5d7e1c0372ad\") " pod="calico-apiserver/calico-apiserver-f86c66868-62ktb" Jul 2 07:06:33.296762 kubelet[3060]: E0702 07:06:33.296716 3060 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 07:06:33.300399 kubelet[3060]: E0702 07:06:33.300364 3060 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1775b9ec-4d50-4063-8645-5d7e1c0372ad-calico-apiserver-certs podName:1775b9ec-4d50-4063-8645-5d7e1c0372ad nodeName:}" failed. No retries permitted until 2024-07-02 07:06:33.796817172 +0000 UTC m=+84.161890750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1775b9ec-4d50-4063-8645-5d7e1c0372ad-calico-apiserver-certs") pod "calico-apiserver-f86c66868-62ktb" (UID: "1775b9ec-4d50-4063-8645-5d7e1c0372ad") : secret "calico-apiserver-certs" not found Jul 2 07:06:33.329000 audit[5540]: NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:33.329000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7fff3f8d9d40 a2=0 a3=7fff3f8d9d2c items=0 ppid=3296 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:33.329000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:33.331000 audit[5540]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:33.331000 audit[5540]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff3f8d9d40 a2=0 a3=7fff3f8d9d2c items=0 ppid=3296 pid=5540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:33.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:33.348000 audit[5543]: NETFILTER_CFG 
table=filter:115 family=2 entries=10 op=nft_register_rule pid=5543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:33.348000 audit[5543]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe99550b10 a2=0 a3=7ffe99550afc items=0 ppid=3296 pid=5543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:33.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:33.350000 audit[5543]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:33.350000 audit[5543]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe99550b10 a2=0 a3=7ffe99550afc items=0 ppid=3296 pid=5543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:33.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:34.013973 containerd[1899]: time="2024-07-02T07:06:34.013921189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f86c66868-62ktb,Uid:1775b9ec-4d50-4063-8645-5d7e1c0372ad,Namespace:calico-apiserver,Attempt:0,}" Jul 2 07:06:34.291986 systemd-networkd[1583]: calicc5788b6cc0: Link UP Jul 2 07:06:34.294108 (udev-worker)[5564]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 07:06:34.295824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:06:34.295881 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicc5788b6cc0: link becomes ready Jul 2 07:06:34.296046 systemd-networkd[1583]: calicc5788b6cc0: Gained carrier Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.157 [INFO][5545] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0 calico-apiserver-f86c66868- calico-apiserver 1775b9ec-4d50-4063-8645-5d7e1c0372ad 998 0 2024-07-02 07:06:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f86c66868 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-25-147 calico-apiserver-f86c66868-62ktb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc5788b6cc0 [] []}} ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.158 [INFO][5545] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.222 [INFO][5556] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" HandleID="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" 
Workload="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.233 [INFO][5556] ipam_plugin.go 264: Auto assigning IP ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" HandleID="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Workload="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002918b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-25-147", "pod":"calico-apiserver-f86c66868-62ktb", "timestamp":"2024-07-02 07:06:34.222864468 +0000 UTC"}, Hostname:"ip-172-31-25-147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.233 [INFO][5556] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.233 [INFO][5556] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.234 [INFO][5556] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-25-147' Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.236 [INFO][5556] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.240 [INFO][5556] ipam.go 372: Looking up existing affinities for host host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.251 [INFO][5556] ipam.go 489: Trying affinity for 192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.254 [INFO][5556] ipam.go 155: Attempting to load block cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.258 [INFO][5556] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.0/26 host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.258 [INFO][5556] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.0/26 handle="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.263 [INFO][5556] ipam.go 1685: Creating new handle: k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.272 [INFO][5556] ipam.go 1203: Writing block in order to claim IPs block=192.168.14.0/26 handle="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.280 [INFO][5556] ipam.go 1216: Successfully claimed IPs: [192.168.14.5/26] block=192.168.14.0/26 
handle="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.280 [INFO][5556] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.5/26] handle="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" host="ip-172-31-25-147" Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.280 [INFO][5556] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 07:06:34.328879 containerd[1899]: 2024-07-02 07:06:34.280 [INFO][5556] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.14.5/26] IPv6=[] ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" HandleID="k8s-pod-network.9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Workload="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.282 [INFO][5545] k8s.go 386: Populated endpoint ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0", GenerateName:"calico-apiserver-f86c66868-", Namespace:"calico-apiserver", SelfLink:"", UID:"1775b9ec-4d50-4063-8645-5d7e1c0372ad", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f86c66868", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"", Pod:"calico-apiserver-f86c66868-62ktb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc5788b6cc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.282 [INFO][5545] k8s.go 387: Calico CNI using IPs: [192.168.14.5/32] ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.282 [INFO][5545] dataplane_linux.go 68: Setting the host side veth name to calicc5788b6cc0 ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.295 [INFO][5545] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.296 [INFO][5545] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0", GenerateName:"calico-apiserver-f86c66868-", Namespace:"calico-apiserver", SelfLink:"", UID:"1775b9ec-4d50-4063-8645-5d7e1c0372ad", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 7, 6, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f86c66868", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-25-147", ContainerID:"9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c", Pod:"calico-apiserver-f86c66868-62ktb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.14.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc5788b6cc0", MAC:"8a:fd:9f:1a:86:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 07:06:34.329948 containerd[1899]: 2024-07-02 07:06:34.321 [INFO][5545] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c" 
Namespace="calico-apiserver" Pod="calico-apiserver-f86c66868-62ktb" WorkloadEndpoint="ip--172--31--25--147-k8s-calico--apiserver--f86c66868--62ktb-eth0" Jul 2 07:06:34.361000 audit[5581]: NETFILTER_CFG table=filter:117 family=2 entries=55 op=nft_register_chain pid=5581 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 07:06:34.361000 audit[5581]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffe635ece20 a2=0 a3=7ffe635ece0c items=0 ppid=4247 pid=5581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:34.361000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 07:06:34.366719 containerd[1899]: time="2024-07-02T07:06:34.366524457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:06:34.366719 containerd[1899]: time="2024-07-02T07:06:34.366672350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:06:34.366951 containerd[1899]: time="2024-07-02T07:06:34.366700946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:06:34.366951 containerd[1899]: time="2024-07-02T07:06:34.366721384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:06:34.424316 systemd[1]: run-containerd-runc-k8s.io-9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c-runc.llmbRN.mount: Deactivated successfully. 
Jul 2 07:06:34.490638 containerd[1899]: time="2024-07-02T07:06:34.490528592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f86c66868-62ktb,Uid:1775b9ec-4d50-4063-8645-5d7e1c0372ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c\"" Jul 2 07:06:34.494723 containerd[1899]: time="2024-07-02T07:06:34.494650691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 07:06:35.535701 systemd-networkd[1583]: calicc5788b6cc0: Gained IPv6LL Jul 2 07:06:37.270276 containerd[1899]: time="2024-07-02T07:06:37.270213107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:37.276718 containerd[1899]: time="2024-07-02T07:06:37.276650284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 07:06:37.279851 containerd[1899]: time="2024-07-02T07:06:37.279804486Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:37.285157 containerd[1899]: time="2024-07-02T07:06:37.285085055Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:37.290684 containerd[1899]: time="2024-07-02T07:06:37.290639340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:06:37.291544 containerd[1899]: time="2024-07-02T07:06:37.291498893Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.79676414s" Jul 2 07:06:37.291663 containerd[1899]: time="2024-07-02T07:06:37.291571075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 07:06:37.295177 containerd[1899]: time="2024-07-02T07:06:37.295110992Z" level=info msg="CreateContainer within sandbox \"9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 07:06:37.321150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298205451.mount: Deactivated successfully. Jul 2 07:06:37.323846 containerd[1899]: time="2024-07-02T07:06:37.323800744Z" level=info msg="CreateContainer within sandbox \"9a04c49265bc0e43e75c0f94d2ca6432fd23b507bc165d37a3570e3406a0060c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dd05696d16798a7cfde4eb861976e512cd0802396de04695b7c275e4ed7de604\"" Jul 2 07:06:37.326867 containerd[1899]: time="2024-07-02T07:06:37.326818480Z" level=info msg="StartContainer for \"dd05696d16798a7cfde4eb861976e512cd0802396de04695b7c275e4ed7de604\"" Jul 2 07:06:37.384501 systemd[1]: run-containerd-runc-k8s.io-dd05696d16798a7cfde4eb861976e512cd0802396de04695b7c275e4ed7de604-runc.iTEzKw.mount: Deactivated successfully. Jul 2 07:06:37.449133 containerd[1899]: time="2024-07-02T07:06:37.449080995Z" level=info msg="StartContainer for \"dd05696d16798a7cfde4eb861976e512cd0802396de04695b7c275e4ed7de604\" returns successfully" Jul 2 07:06:37.571088 systemd[1]: Started sshd@16-172.31.25.147:22-139.178.89.65:54214.service - OpenSSH per-connection server daemon (139.178.89.65:54214). 
Jul 2 07:06:37.574713 kernel: kauditd_printk_skb: 16 callbacks suppressed
Jul 2 07:06:37.574832 kernel: audit: type=1130 audit(1719903997.569:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.25.147:22-139.178.89.65:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:37.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.25.147:22-139.178.89.65:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:37.742833 kernel: audit: type=1325 audit(1719903997.735:397): table=filter:118 family=2 entries=10 op=nft_register_rule pid=5669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:37.742966 kernel: audit: type=1300 audit(1719903997.735:397): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe037c28d0 a2=0 a3=7ffe037c28bc items=0 ppid=3296 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:37.735000 audit[5669]: NETFILTER_CFG table=filter:118 family=2 entries=10 op=nft_register_rule pid=5669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:37.735000 audit[5669]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe037c28d0 a2=0 a3=7ffe037c28bc items=0 ppid=3296 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:37.744851 kernel: audit: type=1327 audit(1719903997.735:397): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:37.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:37.740000 audit[5669]: NETFILTER_CFG table=nat:119 family=2 entries=20 op=nft_register_rule pid=5669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:37.753623 kernel: audit: type=1325 audit(1719903997.740:398): table=nat:119 family=2 entries=20 op=nft_register_rule pid=5669 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:37.753726 kernel: audit: type=1300 audit(1719903997.740:398): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe037c28d0 a2=0 a3=7ffe037c28bc items=0 ppid=3296 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:37.740000 audit[5669]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe037c28d0 a2=0 a3=7ffe037c28bc items=0 ppid=3296 pid=5669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:37.740000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:37.762573 kernel: audit: type=1327 audit(1719903997.740:398): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:37.819360 sshd[5666]: Accepted publickey for core from 139.178.89.65 port 54214 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:06:37.822787 kernel: audit: type=1101 audit(1719903997.816:399): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:37.816000 audit[5666]: USER_ACCT pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:37.822000 audit[5666]: CRED_ACQ pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:37.828841 kernel: audit: type=1103 audit(1719903997.822:400): pid=5666 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:37.828938 kernel: audit: type=1006 audit(1719903997.822:401): pid=5666 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1
Jul 2 07:06:37.822000 audit[5666]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdab5b31e0 a2=3 a3=7f2043816480 items=0 ppid=1 pid=5666 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:37.822000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:37.830066 sshd[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:06:37.841542 systemd-logind[1877]: New session 17 of user core.
Jul 2 07:06:37.846976 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 2 07:06:37.877000 audit[5666]: USER_START pid=5666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:37.879000 audit[5671]: CRED_ACQ pid=5671 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.722951 sshd[5666]: pam_unix(sshd:session): session closed for user core
Jul 2 07:06:38.726000 audit[5666]: USER_END pid=5666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.727000 audit[5666]: CRED_DISP pid=5666 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.25.147:22-139.178.89.65:54214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:38.730979 systemd[1]: sshd@16-172.31.25.147:22-139.178.89.65:54214.service: Deactivated successfully.
Jul 2 07:06:38.732253 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 07:06:38.735014 systemd-logind[1877]: Session 17 logged out. Waiting for processes to exit.
Jul 2 07:06:38.737394 systemd-logind[1877]: Removed session 17.
Jul 2 07:06:38.762294 systemd[1]: Started sshd@17-172.31.25.147:22-139.178.89.65:53456.service - OpenSSH per-connection server daemon (139.178.89.65:53456).
Jul 2 07:06:38.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.25.147:22-139.178.89.65:53456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:38.882333 kubelet[3060]: I0702 07:06:38.882245 3060 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f86c66868-62ktb" podStartSLOduration=3.082942353 podCreationTimestamp="2024-07-02 07:06:33 +0000 UTC" firstStartedPulling="2024-07-02 07:06:34.492651746 +0000 UTC m=+84.857725318" lastFinishedPulling="2024-07-02 07:06:37.291808606 +0000 UTC m=+87.656882179" observedRunningTime="2024-07-02 07:06:37.704742113 +0000 UTC m=+88.069815692" watchObservedRunningTime="2024-07-02 07:06:38.882099214 +0000 UTC m=+89.247172797"
Jul 2 07:06:38.921000 audit[5704]: NETFILTER_CFG table=filter:120 family=2 entries=10 op=nft_register_rule pid=5704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:38.921000 audit[5704]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc6f7128e0 a2=0 a3=7ffc6f7128cc items=0 ppid=3296 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:38.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:38.924000 audit[5704]: NETFILTER_CFG table=nat:121 family=2 entries=20 op=nft_register_rule pid=5704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:38.924000 audit[5704]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc6f7128e0 a2=0 a3=7ffc6f7128cc items=0 ppid=3296 pid=5704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:38.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:38.943000 audit[5706]: NETFILTER_CFG table=filter:122 family=2 entries=9 op=nft_register_rule pid=5706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:38.943000 audit[5706]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdecbc4210 a2=0 a3=7ffdecbc41fc items=0 ppid=3296 pid=5706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:38.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:38.949000 audit[5706]: NETFILTER_CFG table=nat:123 family=2 entries=27 op=nft_register_chain pid=5706 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:38.949000 audit[5706]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffdecbc4210 a2=0 a3=7ffdecbc41fc items=0 ppid=3296 pid=5706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:38.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:38.972631 sshd[5701]: Accepted publickey for core from 139.178.89.65 port 53456 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:06:38.970000 audit[5701]: USER_ACCT pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.974000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.974000 audit[5701]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea7b9ba60 a2=3 a3=7fc63f3c2480 items=0 ppid=1 pid=5701 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:38.974000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:38.977024 sshd[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:06:38.984639 systemd-logind[1877]: New session 18 of user core.
Jul 2 07:06:38.987864 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 2 07:06:38.995000 audit[5701]: USER_START pid=5701 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:38.998000 audit[5708]: CRED_ACQ pid=5708 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.591037 sshd[5701]: pam_unix(sshd:session): session closed for user core
Jul 2 07:06:39.598000 audit[5701]: USER_END pid=5701 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.598000 audit[5701]: CRED_DISP pid=5701 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.602078 systemd[1]: sshd@17-172.31.25.147:22-139.178.89.65:53456.service: Deactivated successfully.
Jul 2 07:06:39.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.25.147:22-139.178.89.65:53456 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:39.603606 systemd-logind[1877]: Session 18 logged out. Waiting for processes to exit.
Jul 2 07:06:39.604283 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 07:06:39.605651 systemd-logind[1877]: Removed session 18.
Jul 2 07:06:39.618082 systemd[1]: Started sshd@18-172.31.25.147:22-139.178.89.65:53470.service - OpenSSH per-connection server daemon (139.178.89.65:53470).
Jul 2 07:06:39.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.25.147:22-139.178.89.65:53470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:39.782000 audit[5717]: USER_ACCT pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.784397 sshd[5717]: Accepted publickey for core from 139.178.89.65 port 53470 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:06:39.784000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.784000 audit[5717]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6eedd250 a2=3 a3=7f6105742480 items=0 ppid=1 pid=5717 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:39.784000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:39.786744 sshd[5717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:06:39.794713 systemd-logind[1877]: New session 19 of user core.
Jul 2 07:06:39.799021 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 2 07:06:39.810000 audit[5717]: USER_START pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:39.811000 audit[5720]: CRED_ACQ pid=5720 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:40.629426 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.761Tc8.mount: Deactivated successfully.
Jul 2 07:06:41.449000 audit[5749]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:41.449000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe11543790 a2=0 a3=7ffe1154377c items=0 ppid=3296 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:41.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:41.451000 audit[5749]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5749 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:41.451000 audit[5749]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe11543790 a2=0 a3=0 items=0 ppid=3296 pid=5749 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:41.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:41.481000 audit[5751]: NETFILTER_CFG table=filter:126 family=2 entries=32 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:41.481000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffff9db9480 a2=0 a3=7ffff9db946c items=0 ppid=3296 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:41.481000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:41.482000 audit[5751]: NETFILTER_CFG table=nat:127 family=2 entries=22 op=nft_register_rule pid=5751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Jul 2 07:06:41.482000 audit[5751]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffff9db9480 a2=0 a3=0 items=0 ppid=3296 pid=5751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:41.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Jul 2 07:06:41.497837 sshd[5717]: pam_unix(sshd:session): session closed for user core
Jul 2 07:06:41.499000 audit[5717]: USER_END pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:41.501000 audit[5717]: CRED_DISP pid=5717 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:41.508701 systemd-logind[1877]: Session 19 logged out. Waiting for processes to exit.
Jul 2 07:06:41.509097 systemd[1]: sshd@18-172.31.25.147:22-139.178.89.65:53470.service: Deactivated successfully.
Jul 2 07:06:41.510357 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 07:06:41.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.25.147:22-139.178.89.65:53470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:41.512442 systemd-logind[1877]: Removed session 19.
Jul 2 07:06:41.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.25.147:22-139.178.89.65:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:41.529245 systemd[1]: Started sshd@19-172.31.25.147:22-139.178.89.65:53474.service - OpenSSH per-connection server daemon (139.178.89.65:53474).
Jul 2 07:06:41.710000 audit[5754]: USER_ACCT pid=5754 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:41.713604 sshd[5754]: Accepted publickey for core from 139.178.89.65 port 53474 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:06:41.713000 audit[5754]: CRED_ACQ pid=5754 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:41.713000 audit[5754]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe55ba3260 a2=3 a3=7f91a370b480 items=0 ppid=1 pid=5754 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:41.713000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:41.717962 sshd[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:06:41.732260 systemd-logind[1877]: New session 20 of user core.
Jul 2 07:06:41.736949 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 2 07:06:41.743000 audit[5754]: USER_START pid=5754 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:41.745000 audit[5757]: CRED_ACQ pid=5757 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:42.856156 sshd[5754]: pam_unix(sshd:session): session closed for user core
Jul 2 07:06:42.864011 kernel: kauditd_printk_skb: 61 callbacks suppressed
Jul 2 07:06:42.864135 kernel: audit: type=1106 audit(1719904002.857:439): pid=5754 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:42.857000 audit[5754]: USER_END pid=5754 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:42.857000 audit[5754]: CRED_DISP pid=5754 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:42.864397 systemd-logind[1877]: Session 20 logged out. Waiting for processes to exit.
Jul 2 07:06:42.869268 kernel: audit: type=1104 audit(1719904002.857:440): pid=5754 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:42.869364 kernel: audit: type=1131 audit(1719904002.866:441): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.25.147:22-139.178.89.65:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:42.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.25.147:22-139.178.89.65:53474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:42.866921 systemd[1]: sshd@19-172.31.25.147:22-139.178.89.65:53474.service: Deactivated successfully.
Jul 2 07:06:42.868465 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 07:06:42.871233 systemd-logind[1877]: Removed session 20.
Jul 2 07:06:42.886207 systemd[1]: Started sshd@20-172.31.25.147:22-139.178.89.65:53490.service - OpenSSH per-connection server daemon (139.178.89.65:53490).
Jul 2 07:06:42.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.25.147:22-139.178.89.65:53490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:42.890613 kernel: audit: type=1130 audit(1719904002.885:442): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.25.147:22-139.178.89.65:53490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:43.044000 audit[5770]: USER_ACCT pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.049457 sshd[5770]: Accepted publickey for core from 139.178.89.65 port 53490 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:06:43.050337 kernel: audit: type=1101 audit(1719904003.044:443): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.048000 audit[5770]: CRED_ACQ pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.052038 sshd[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:06:43.056895 kernel: audit: type=1103 audit(1719904003.048:444): pid=5770 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.057062 kernel: audit: type=1006 audit(1719904003.050:445): pid=5770 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1
Jul 2 07:06:43.057138 kernel: audit: type=1300 audit(1719904003.050:445): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc0514f0 a2=3 a3=7f0d41290480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:43.050000 audit[5770]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbc0514f0 a2=3 a3=7f0d41290480 items=0 ppid=1 pid=5770 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:06:43.060880 kernel: audit: type=1327 audit(1719904003.050:445): proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:43.050000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:06:43.067419 systemd-logind[1877]: New session 21 of user core.
Jul 2 07:06:43.071178 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 2 07:06:43.078000 audit[5770]: USER_START pid=5770 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.082000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.084698 kernel: audit: type=1105 audit(1719904003.078:446): pid=5770 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.348602 sshd[5770]: pam_unix(sshd:session): session closed for user core
Jul 2 07:06:43.348000 audit[5770]: USER_END pid=5770 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.348000 audit[5770]: CRED_DISP pid=5770 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:06:43.352527 systemd[1]: sshd@20-172.31.25.147:22-139.178.89.65:53490.service: Deactivated successfully.
Jul 2 07:06:43.353536 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 07:06:43.354121 systemd-logind[1877]: Session 21 logged out. Waiting for processes to exit.
Jul 2 07:06:43.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.25.147:22-139.178.89.65:53490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:43.355161 systemd-logind[1877]: Removed session 21.
Jul 2 07:06:48.382117 kernel: kauditd_printk_skb: 4 callbacks suppressed
Jul 2 07:06:48.382250 kernel: audit: type=1130 audit(1719904008.376:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.25.147:22-139.178.89.65:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:48.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.25.147:22-139.178.89.65:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:06:48.377770 systemd[1]: Started sshd@21-172.31.25.147:22-139.178.89.65:47522.service - OpenSSH per-connection server daemon (139.178.89.65:47522).
Jul 2 07:06:48.542021 kernel: audit: type=1101 audit(1719904008.533:452): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.542731 kernel: audit: type=1103 audit(1719904008.533:453): pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.533000 audit[5785]: USER_ACCT pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.533000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.538298 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:48.543859 sshd[5785]: Accepted publickey for core from 139.178.89.65 port 47522 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:48.550136 kernel: audit: type=1006 audit(1719904008.533:454): pid=5785 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 2 07:06:48.550243 kernel: audit: type=1300 audit(1719904008.533:454): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe36ecbcf0 a2=3 a3=7fdf58e6b480 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:48.533000 audit[5785]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe36ecbcf0 a2=3 a3=7fdf58e6b480 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:48.533000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:48.559380 kernel: audit: type=1327 audit(1719904008.533:454): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:48.571515 systemd-logind[1877]: New session 22 of user core. Jul 2 07:06:48.579095 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 07:06:48.587000 audit[5785]: USER_START pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.592693 kernel: audit: type=1105 audit(1719904008.587:455): pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.592746 kernel: audit: type=1103 audit(1719904008.587:456): pid=5788 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.587000 audit[5788]: CRED_ACQ pid=5788 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' 
Jul 2 07:06:48.823231 sshd[5785]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:48.825000 audit[5785]: USER_END pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.829078 systemd[1]: sshd@21-172.31.25.147:22-139.178.89.65:47522.service: Deactivated successfully. Jul 2 07:06:48.832592 kernel: audit: type=1106 audit(1719904008.825:457): pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.832431 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:06:48.825000 audit[5785]: CRED_DISP pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.839047 kernel: audit: type=1104 audit(1719904008.825:458): pid=5785 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:48.838387 systemd-logind[1877]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:06:48.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.25.147:22-139.178.89.65:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:48.841317 systemd-logind[1877]: Removed session 22. 
Jul 2 07:06:49.006000 audit[5799]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:49.006000 audit[5799]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffff1789050 a2=0 a3=7ffff178903c items=0 ppid=3296 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:49.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:49.012000 audit[5799]: NETFILTER_CFG table=nat:129 family=2 entries=106 op=nft_register_chain pid=5799 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 07:06:49.012000 audit[5799]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffff1789050 a2=0 a3=7ffff178903c items=0 ppid=3296 pid=5799 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:49.012000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 07:06:53.863361 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 2 07:06:53.863504 kernel: audit: type=1130 audit(1719904013.860:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.25.147:22-139.178.89.65:47528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:53.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.25.147:22-139.178.89.65:47528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:06:53.861951 systemd[1]: Started sshd@22-172.31.25.147:22-139.178.89.65:47528.service - OpenSSH per-connection server daemon (139.178.89.65:47528). Jul 2 07:06:54.031000 audit[5808]: USER_ACCT pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.037608 sshd[5808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:54.034000 audit[5808]: CRED_ACQ pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.039917 sshd[5808]: Accepted publickey for core from 139.178.89.65 port 47528 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:54.049692 kernel: audit: type=1101 audit(1719904014.031:463): pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.049785 kernel: audit: type=1103 audit(1719904014.034:464): pid=5808 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.049844 kernel: audit: type=1006 audit(1719904014.036:465): pid=5808 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 2 07:06:54.059115 kernel: audit: type=1300 audit(1719904014.036:465): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb2588270 
a2=3 a3=7fbd186a6480 items=0 ppid=1 pid=5808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:54.036000 audit[5808]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcb2588270 a2=3 a3=7fbd186a6480 items=0 ppid=1 pid=5808 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:54.067893 kernel: audit: type=1327 audit(1719904014.036:465): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:54.036000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:54.067816 systemd-logind[1877]: New session 23 of user core. Jul 2 07:06:54.073956 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 07:06:54.084000 audit[5808]: USER_START pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.090000 audit[5811]: CRED_ACQ pid=5811 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.096193 kernel: audit: type=1105 audit(1719904014.084:466): pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.096306 kernel: audit: type=1103 audit(1719904014.090:467): pid=5811 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.297275 sshd[5808]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:54.297000 audit[5808]: USER_END pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.303685 kernel: audit: type=1106 audit(1719904014.297:468): pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.298000 audit[5808]: CRED_DISP pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.306762 kernel: audit: type=1104 audit(1719904014.298:469): pid=5808 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:54.304602 systemd[1]: sshd@22-172.31.25.147:22-139.178.89.65:47528.service: Deactivated successfully. Jul 2 07:06:54.305829 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:06:54.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.25.147:22-139.178.89.65:47528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 07:06:54.307499 systemd-logind[1877]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:06:54.308797 systemd-logind[1877]: Removed session 23. Jul 2 07:06:59.333148 systemd[1]: Started sshd@23-172.31.25.147:22-139.178.89.65:52414.service - OpenSSH per-connection server daemon (139.178.89.65:52414). Jul 2 07:06:59.336975 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:06:59.337053 kernel: audit: type=1130 audit(1719904019.331:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.25.147:22-139.178.89.65:52414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:59.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.25.147:22-139.178.89.65:52414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:59.500000 audit[5822]: USER_ACCT pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.502155 sshd[5822]: Accepted publickey for core from 139.178.89.65 port 52414 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:06:59.504584 kernel: audit: type=1101 audit(1719904019.500:472): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.504000 audit[5822]: CRED_ACQ pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" 
hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.506338 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:06:59.510375 kernel: audit: type=1103 audit(1719904019.504:473): pid=5822 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.510475 kernel: audit: type=1006 audit(1719904019.504:474): pid=5822 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 2 07:06:59.510507 kernel: audit: type=1300 audit(1719904019.504:474): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6436360 a2=3 a3=7f44304c5480 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:59.504000 audit[5822]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff6436360 a2=3 a3=7f44304c5480 items=0 ppid=1 pid=5822 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:06:59.504000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:59.514224 kernel: audit: type=1327 audit(1719904019.504:474): proctitle=737368643A20636F7265205B707269765D Jul 2 07:06:59.519047 systemd-logind[1877]: New session 24 of user core. Jul 2 07:06:59.521961 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jul 2 07:06:59.527000 audit[5822]: USER_START pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.532000 audit[5825]: CRED_ACQ pid=5825 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.535997 kernel: audit: type=1105 audit(1719904019.527:475): pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.536060 kernel: audit: type=1103 audit(1719904019.532:476): pid=5825 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.724582 sshd[5822]: pam_unix(sshd:session): session closed for user core Jul 2 07:06:59.724000 audit[5822]: USER_END pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.729606 kernel: audit: type=1106 audit(1719904019.724:477): pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.726000 audit[5822]: CRED_DISP pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.742775 systemd-logind[1877]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:06:59.743604 kernel: audit: type=1104 audit(1719904019.726:478): pid=5822 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:06:59.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.25.147:22-139.178.89.65:52414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:06:59.745032 systemd[1]: sshd@23-172.31.25.147:22-139.178.89.65:52414.service: Deactivated successfully. Jul 2 07:06:59.746334 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:06:59.758430 systemd-logind[1877]: Removed session 24. Jul 2 07:07:04.764919 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:07:04.765075 kernel: audit: type=1130 audit(1719904024.756:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.25.147:22-139.178.89.65:52420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:07:04.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.25.147:22-139.178.89.65:52420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:07:04.758177 systemd[1]: Started sshd@24-172.31.25.147:22-139.178.89.65:52420.service - OpenSSH per-connection server daemon (139.178.89.65:52420). Jul 2 07:07:04.953000 audit[5840]: USER_ACCT pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:04.957350 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:04.974099 kernel: audit: type=1101 audit(1719904024.953:481): pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:04.974176 kernel: audit: type=1103 audit(1719904024.953:482): pid=5840 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:04.974211 kernel: audit: type=1006 audit(1719904024.953:483): pid=5840 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 2 07:07:04.974239 kernel: audit: type=1300 audit(1719904024.953:483): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef0103f0 a2=3 a3=7f21c4a16480 items=0 ppid=1 pid=5840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:07:04.974279 kernel: audit: type=1327 audit(1719904024.953:483): proctitle=737368643A20636F7265205B707269765D Jul 2 07:07:04.953000 audit[5840]: CRED_ACQ pid=5840 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:04.953000 audit[5840]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdef0103f0 a2=3 a3=7f21c4a16480 items=0 ppid=1 pid=5840 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:07:04.953000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 07:07:04.974978 sshd[5840]: Accepted publickey for core from 139.178.89.65 port 52420 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:07:04.975195 systemd-logind[1877]: New session 25 of user core. Jul 2 07:07:04.978635 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 2 07:07:05.021000 audit[5840]: USER_START pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.021000 audit[5843]: CRED_ACQ pid=5843 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.033033 kernel: audit: type=1105 audit(1719904025.021:484): pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.033117 kernel: audit: type=1103 audit(1719904025.021:485): pid=5843 uid=0 auid=500 ses=25 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.285966 sshd[5840]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:05.285000 audit[5840]: USER_END pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.290573 kernel: audit: type=1106 audit(1719904025.285:486): pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.290672 kernel: audit: type=1104 audit(1719904025.287:487): pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.287000 audit[5840]: CRED_DISP pid=5840 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:05.292226 systemd-logind[1877]: Session 25 logged out. Waiting for processes to exit. Jul 2 07:07:05.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.25.147:22-139.178.89.65:52420 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:07:05.293573 systemd[1]: sshd@24-172.31.25.147:22-139.178.89.65:52420.service: Deactivated successfully. Jul 2 07:07:05.294795 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 07:07:05.298373 systemd-logind[1877]: Removed session 25. Jul 2 07:07:10.321246 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 07:07:10.321333 kernel: audit: type=1130 audit(1719904030.316:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.25.147:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:07:10.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.25.147:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:07:10.318086 systemd[1]: Started sshd@25-172.31.25.147:22-139.178.89.65:52478.service - OpenSSH per-connection server daemon (139.178.89.65:52478). 
Jul 2 07:07:10.507000 audit[5878]: USER_ACCT pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:10.508948 sshd[5878]: Accepted publickey for core from 139.178.89.65 port 52478 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY Jul 2 07:07:10.507000 audit[5878]: CRED_ACQ pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:10.514233 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:10.515871 kernel: audit: type=1101 audit(1719904030.507:490): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:10.515988 kernel: audit: type=1103 audit(1719904030.507:491): pid=5878 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Jul 2 07:07:10.516634 kernel: audit: type=1006 audit(1719904030.507:492): pid=5878 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 2 07:07:10.518589 kernel: audit: type=1300 audit(1719904030.507:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b288a70 a2=3 a3=7f505c210480 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:07:10.507000 audit[5878]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5b288a70 a2=3 a3=7f505c210480 items=0 ppid=1 pid=5878 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:07:10.507000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:07:10.522698 kernel: audit: type=1327 audit(1719904030.507:492): proctitle=737368643A20636F7265205B707269765D
Jul 2 07:07:10.549616 systemd-logind[1877]: New session 26 of user core.
Jul 2 07:07:10.553937 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 2 07:07:10.583000 audit[5878]: USER_START pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.589573 kernel: audit: type=1105 audit(1719904030.583:493): pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.589728 kernel: audit: type=1103 audit(1719904030.587:494): pid=5882 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.587000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.848070 sshd[5878]: pam_unix(sshd:session): session closed for user core
Jul 2 07:07:10.858842 kernel: audit: type=1106 audit(1719904030.848:495): pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.859027 kernel: audit: type=1104 audit(1719904030.848:496): pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.848000 audit[5878]: USER_END pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.848000 audit[5878]: CRED_DISP pid=5878 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:10.857010 systemd[1]: sshd@25-172.31.25.147:22-139.178.89.65:52478.service: Deactivated successfully.
Jul 2 07:07:10.859946 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 07:07:10.862299 systemd-logind[1877]: Session 26 logged out. Waiting for processes to exit.
Jul 2 07:07:10.863768 systemd-logind[1877]: Removed session 26.
Jul 2 07:07:10.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.25.147:22-139.178.89.65:52478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:07:15.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.25.147:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:07:15.882706 systemd[1]: Started sshd@26-172.31.25.147:22-139.178.89.65:52494.service - OpenSSH per-connection server daemon (139.178.89.65:52494).
Jul 2 07:07:15.886542 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jul 2 07:07:15.888023 kernel: audit: type=1130 audit(1719904035.881:498): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.25.147:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:07:16.047000 audit[5922]: USER_ACCT pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.049425 sshd[5922]: Accepted publickey for core from 139.178.89.65 port 52494 ssh2: RSA SHA256:Frae9zInzdHkfeUg1oRnCiPHXrZNT4iSeSbXGwnL5bY
Jul 2 07:07:16.051000 audit[5922]: CRED_ACQ pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.053916 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:07:16.056956 kernel: audit: type=1101 audit(1719904036.047:499): pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.057041 kernel: audit: type=1103 audit(1719904036.051:500): pid=5922 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.057075 kernel: audit: type=1006 audit(1719904036.051:501): pid=5922 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Jul 2 07:07:16.051000 audit[5922]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea6b76c90 a2=3 a3=7efc0ac45480 items=0 ppid=1 pid=5922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:07:16.062109 kernel: audit: type=1300 audit(1719904036.051:501): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea6b76c90 a2=3 a3=7efc0ac45480 items=0 ppid=1 pid=5922 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:07:16.062192 kernel: audit: type=1327 audit(1719904036.051:501): proctitle=737368643A20636F7265205B707269765D
Jul 2 07:07:16.051000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Jul 2 07:07:16.064731 systemd-logind[1877]: New session 27 of user core.
Jul 2 07:07:16.073949 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 2 07:07:16.086000 audit[5922]: USER_START pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.087000 audit[5926]: CRED_ACQ pid=5926 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.095616 kernel: audit: type=1105 audit(1719904036.086:502): pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.095878 kernel: audit: type=1103 audit(1719904036.087:503): pid=5926 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.318752 sshd[5922]: pam_unix(sshd:session): session closed for user core
Jul 2 07:07:16.320000 audit[5922]: USER_END pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.320000 audit[5922]: CRED_DISP pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.330779 kernel: audit: type=1106 audit(1719904036.320:504): pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.330895 kernel: audit: type=1104 audit(1719904036.320:505): pid=5922 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success'
Jul 2 07:07:16.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.25.147:22-139.178.89.65:52494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:07:16.328545 systemd[1]: sshd@26-172.31.25.147:22-139.178.89.65:52494.service: Deactivated successfully.
Jul 2 07:07:16.329852 systemd[1]: session-27.scope: Deactivated successfully.
Jul 2 07:07:16.331806 systemd-logind[1877]: Session 27 logged out. Waiting for processes to exit.
Jul 2 07:07:16.333846 systemd-logind[1877]: Removed session 27.
Jul 2 07:07:25.833846 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.rajwdp.mount: Deactivated successfully.
Jul 2 07:07:30.690202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6-rootfs.mount: Deactivated successfully.
Jul 2 07:07:30.711924 containerd[1899]: time="2024-07-02T07:07:30.684964620Z" level=info msg="shim disconnected" id=24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6 namespace=k8s.io
Jul 2 07:07:30.712725 containerd[1899]: time="2024-07-02T07:07:30.711927243Z" level=warning msg="cleaning up after shim disconnected" id=24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6 namespace=k8s.io
Jul 2 07:07:30.712725 containerd[1899]: time="2024-07-02T07:07:30.711951562Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 07:07:30.938602 kubelet[3060]: I0702 07:07:30.938547 3060 scope.go:117] "RemoveContainer" containerID="24237807b590d8f99ffde70f9c8cf21e1e3694ae51cf79c0d2e208c2ae01d8c6"
Jul 2 07:07:30.952644 containerd[1899]: time="2024-07-02T07:07:30.952117714Z" level=info msg="CreateContainer within sandbox \"82fcf12a0ffc9d6b8ccd462c205220d1aa544d89ba4dfe6cdce34a89df7b4d99\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jul 2 07:07:31.012974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639185068.mount: Deactivated successfully.
Jul 2 07:07:31.034509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310756366.mount: Deactivated successfully.
Jul 2 07:07:31.034757 containerd[1899]: time="2024-07-02T07:07:31.034500450Z" level=info msg="CreateContainer within sandbox \"82fcf12a0ffc9d6b8ccd462c205220d1aa544d89ba4dfe6cdce34a89df7b4d99\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"13f7b6e84f7ce34962407be69860111226b8126fd4197e880b1aedd823c9eecc\""
Jul 2 07:07:31.035519 containerd[1899]: time="2024-07-02T07:07:31.035484936Z" level=info msg="StartContainer for \"13f7b6e84f7ce34962407be69860111226b8126fd4197e880b1aedd823c9eecc\""
Jul 2 07:07:31.155267 containerd[1899]: time="2024-07-02T07:07:31.155204649Z" level=info msg="StartContainer for \"13f7b6e84f7ce34962407be69860111226b8126fd4197e880b1aedd823c9eecc\" returns successfully"
Jul 2 07:07:31.889102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470-rootfs.mount: Deactivated successfully.
Jul 2 07:07:31.892038 containerd[1899]: time="2024-07-02T07:07:31.891970241Z" level=info msg="shim disconnected" id=8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470 namespace=k8s.io
Jul 2 07:07:31.892038 containerd[1899]: time="2024-07-02T07:07:31.892036412Z" level=warning msg="cleaning up after shim disconnected" id=8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470 namespace=k8s.io
Jul 2 07:07:31.892542 containerd[1899]: time="2024-07-02T07:07:31.892048116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 07:07:31.951838 kubelet[3060]: I0702 07:07:31.951809 3060 scope.go:117] "RemoveContainer" containerID="8e211251ab2cc66540c09d6083cd2dac7495dd039507b01a503ee0e048059470"
Jul 2 07:07:31.955838 containerd[1899]: time="2024-07-02T07:07:31.955793739Z" level=info msg="CreateContainer within sandbox \"e21fadad186d46978d41d2462aafca2a058cf1416645c4994a990b81ef7174e8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jul 2 07:07:32.001329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1367156324.mount: Deactivated successfully.
Jul 2 07:07:32.012983 containerd[1899]: time="2024-07-02T07:07:32.012918657Z" level=info msg="CreateContainer within sandbox \"e21fadad186d46978d41d2462aafca2a058cf1416645c4994a990b81ef7174e8\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"429755bb590998773c39449ace453802f67fc84b4da143303b9546a899c644d3\""
Jul 2 07:07:32.015213 containerd[1899]: time="2024-07-02T07:07:32.015173120Z" level=info msg="StartContainer for \"429755bb590998773c39449ace453802f67fc84b4da143303b9546a899c644d3\""
Jul 2 07:07:32.128327 containerd[1899]: time="2024-07-02T07:07:32.128281233Z" level=info msg="StartContainer for \"429755bb590998773c39449ace453802f67fc84b4da143303b9546a899c644d3\" returns successfully"
Jul 2 07:07:32.778710 kubelet[3060]: E0702 07:07:32.778672 3060 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 07:07:34.944314 containerd[1899]: time="2024-07-02T07:07:34.944249445Z" level=info msg="shim disconnected" id=0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a namespace=k8s.io
Jul 2 07:07:34.945088 containerd[1899]: time="2024-07-02T07:07:34.945059081Z" level=warning msg="cleaning up after shim disconnected" id=0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a namespace=k8s.io
Jul 2 07:07:34.945204 containerd[1899]: time="2024-07-02T07:07:34.945184457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 2 07:07:34.949504 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a-rootfs.mount: Deactivated successfully.
Jul 2 07:07:35.966146 kubelet[3060]: I0702 07:07:35.966112 3060 scope.go:117] "RemoveContainer" containerID="0d7d725e904c8ff40d90b7c2bea8c011102cb6a20d3f12ace6da97543c1a802a"
Jul 2 07:07:35.969136 containerd[1899]: time="2024-07-02T07:07:35.969094813Z" level=info msg="CreateContainer within sandbox \"6070298d9ab1ba4985f3d4ec5d46cbe1f54e885c611a57f241922b7a1f45adb9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 07:07:36.000357 containerd[1899]: time="2024-07-02T07:07:36.000300456Z" level=info msg="CreateContainer within sandbox \"6070298d9ab1ba4985f3d4ec5d46cbe1f54e885c611a57f241922b7a1f45adb9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c71d8d52bab6a18f20bdca0f9b45ebd7cdf9d3ecb884c10a9421efa23c8099a1\""
Jul 2 07:07:36.000985 containerd[1899]: time="2024-07-02T07:07:36.000885647Z" level=info msg="StartContainer for \"c71d8d52bab6a18f20bdca0f9b45ebd7cdf9d3ecb884c10a9421efa23c8099a1\""
Jul 2 07:07:36.110271 systemd[1]: run-containerd-runc-k8s.io-c71d8d52bab6a18f20bdca0f9b45ebd7cdf9d3ecb884c10a9421efa23c8099a1-runc.bEeaIJ.mount: Deactivated successfully.
Jul 2 07:07:36.167191 containerd[1899]: time="2024-07-02T07:07:36.166922857Z" level=info msg="StartContainer for \"c71d8d52bab6a18f20bdca0f9b45ebd7cdf9d3ecb884c10a9421efa23c8099a1\" returns successfully"
Jul 2 07:07:40.619613 systemd[1]: run-containerd-runc-k8s.io-eeabeae56c06cd36e75a03fd512ffe6d20c5c11a428427a00ad9f668aabcf8cf-runc.30R5E6.mount: Deactivated successfully.
Jul 2 07:07:42.779719 kubelet[3060]: E0702 07:07:42.779686 3060 controller.go:193] "Failed to update lease" err="Put \"https://172.31.25.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-25-147?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"